HADOOP-11843. Make setting up the build environment easier. Contributed by Niels Basjes.

Committed by cnauroth on 2015-04-24 13:05:18 -07:00
parent d03dcb9635
commit 80935268f5
5 changed files with 276 additions and 1 deletion

BUILDING.txt

@@ -15,6 +15,43 @@ Requirements:
* Linux FUSE (Filesystem in Userspace) version 2.6 or above ( if compiling fuse_dfs )
* Internet connection for first build (to fetch all Maven and Hadoop dependencies)
----------------------------------------------------------------------------------
The easiest way to get an environment with all the appropriate tools is by using
the provided Docker config.
This requires a recent version of Docker ( 1.4.1 and higher are known to work ).
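To check which version is installed, run:
$ docker --version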
On Linux:
Install Docker and run this command:
$ ./start-build-env.sh
On Mac:
First make sure Homebrew has been installed ( http://brew.sh/ )
$ brew install docker boot2docker
$ boot2docker init -m 4096
$ boot2docker start
$ $(boot2docker shellinit)
$ ./start-build-env.sh
The shell prompt you are then presented with is located in a mounted version of the
source tree, and all tools required for building and testing have been installed and configured.
Note that from within this Docker environment you ONLY have access to the Hadoop source
tree from which you started. So if you need to run
dev-support/test-patch.sh /path/to/my.patch
then the patch must be placed inside the Hadoop source tree.
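For example (the host-side paths here are only illustrative), copy the patch into the
source tree before starting the container:
$ cp /tmp/my.patch /path/to/hadoop/my.patch    # on the host
$ ./start-build-env.sh
$ dev-support/test-patch.sh my.patch           # inside the container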
Known issues:
- On Mac with Boot2Docker the performance on the mounted directory is currently extremely slow.
This is a known problem related to boot2docker on the Mac.
See:
https://github.com/boot2docker/boot2docker/issues/593
This issue has been closed as a duplicate, pointing to a new feature for utilizing NFS
mounts as the proposed solution:
https://github.com/boot2docker/boot2docker/issues/64
An alternative solution is to install Linux natively inside a virtual machine and run
your IDE, Docker, etc. inside that VM.
----------------------------------------------------------------------------------
Installing required packages for clean install of Ubuntu 14.04 LTS Desktop:
@@ -29,7 +66,7 @@ Installing required packages for clean install of Ubuntu 14.04 LTS Desktop:
* Native libraries
$ sudo apt-get -y install build-essential autoconf automake libtool cmake zlib1g-dev pkg-config libssl-dev
* ProtocolBuffer 2.5.0 (required)
$ sudo apt-get -y install libprotobuf-dev protobuf-compiler
$ sudo apt-get -y install protobuf-compiler
Optional packages:

dev-support/docker/Dockerfile (new file)

@@ -0,0 +1,67 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Dockerfile for installing the necessary dependencies for building Hadoop.
# See BUILDING.txt.
# FROM dockerfile/java:openjdk-7-jdk
FROM dockerfile/java:oracle-java7
WORKDIR /root
# Install dependencies from packages
RUN apt-get update && apt-get install --no-install-recommends -y \
git curl ant make maven \
cmake gcc g++ protobuf-compiler \
build-essential libtool \
zlib1g-dev pkg-config libssl-dev \
snappy libsnappy-dev \
bzip2 libbz2-dev \
libjansson-dev \
fuse libfuse-dev \
libcurl4-openssl-dev \
python python2.7
# Install Forrest
RUN mkdir -p /usr/local/apache-forrest ; \
curl -O http://archive.apache.org/dist/forrest/0.8/apache-forrest-0.8.tar.gz ; \
tar xzf *forrest* --strip-components 1 -C /usr/local/apache-forrest ; \
echo 'forrest.home=/usr/local/apache-forrest' > build.properties
# Install findbugs
RUN mkdir -p /opt/findbugs && \
wget http://sourceforge.net/projects/findbugs/files/findbugs/3.0.1/findbugs-noUpdateChecks-3.0.1.tar.gz/download \
-O /opt/findbugs.tar.gz && \
tar xzf /opt/findbugs.tar.gz --strip-components 1 -C /opt/findbugs
ENV FINDBUGS_HOME /opt/findbugs
# Install shellcheck
RUN apt-get install -y cabal-install
RUN cabal update && cabal install shellcheck --global
# Fixing the Apache commons / Maven dependency problem under Ubuntu:
# See http://wiki.apache.org/commons/VfsProblems
RUN cd /usr/share/maven/lib && ln -s ../../java/commons-lang.jar .
# Avoid out of memory errors in builds
ENV MAVEN_OPTS -Xms256m -Xmx512m
# Add a welcome message and environment checks.
ADD hadoop_env_checks.sh /root/hadoop_env_checks.sh
RUN chmod 755 /root/hadoop_env_checks.sh
RUN echo '~/hadoop_env_checks.sh' >> /root/.bashrc

dev-support/docker/hadoop_env_checks.sh (new file)

@@ -0,0 +1,118 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -------------------------------------------------------
function showWelcome {
cat <<Welcome-message
_ _ _ ______
| | | | | | | _ \\
| |_| | __ _ __| | ___ ___ _ __ | | | |_____ __
| _ |/ _\` |/ _\` |/ _ \\ / _ \\| '_ \\ | | | / _ \\ \\ / /
| | | | (_| | (_| | (_) | (_) | |_) | | |/ / __/\\ V /
\\_| |_/\\__,_|\\__,_|\\___/ \\___/| .__/ |___/ \\___| \\_(_)
| |
|_|
This is the standard Hadoop Developer build environment.
This has all the right tools installed required to build
Hadoop from source.
Welcome-message
}
# -------------------------------------------------------
function showAbort {
cat <<Abort-message
___ _ _ _
/ _ \\| | | | (_)
/ /_\\ \\ |__ ___ _ __| |_ _ _ __ __ _
| _ | '_ \\ / _ \\| '__| __| | '_ \\ / _\` |
| | | | |_) | (_) | | | |_| | | | | (_| |
\\_| |_/_.__/ \\___/|_| \\__|_|_| |_|\\__, |
__/ |
|___/
Abort-message
}
# -------------------------------------------------------
function failIfUserIsRoot {
if [ "$(id -u)" -eq "0" ]; # If you are root then something went wrong.
then
cat <<End-of-message
Apparently you are inside this docker container as the user root.
Putting it simply:
This should not occur.
Known possible causes of this are:
1) Running this script as the root user ( Just don't )
2) Running an old docker version ( upgrade to 1.4.1 or higher )
End-of-message
showAbort
logout
fi
}
# -------------------------------------------------------
# Configurable low water mark in GiB
MINIMAL_MEMORY_GiB=2
function warnIfLowMemory {
MINIMAL_MEMORY=$((MINIMAL_MEMORY_GiB*1024*1024)) # Convert to KiB
INSTALLED_MEMORY=$(fgrep MemTotal /proc/meminfo | awk '{print $2}')
if [ $((INSTALLED_MEMORY)) -le $((MINIMAL_MEMORY)) ];
then
cat <<End-of-message
_ ___ ___
| | | \\/ |
| | _____ __ | . . | ___ _ __ ___ ___ _ __ _ _
| | / _ \\ \\ /\\ / / | |\\/| |/ _ \\ '_ \` _ \\ / _ \\| '__| | | |
| |___| (_) \\ V V / | | | | __/ | | | | | (_) | | | |_| |
\\_____/\\___/ \\_/\\_/ \\_| |_/\\___|_| |_| |_|\\___/|_| \\__, |
__/ |
|___/
Your system is running on very little memory.
This means it may work, but it will most likely be slower than needed.
If you are running this via boot2docker you can simply increase
the available memory to at least ${MINIMAL_MEMORY_GiB} GiB (you have $((INSTALLED_MEMORY/(1024*1024))) GiB )
End-of-message
fi
}
# -------------------------------------------------------
showWelcome
warnIfLowMemory
failIfUserIsRoot
# -------------------------------------------------------

hadoop-common-project/hadoop-common/CHANGES.txt

@@ -465,6 +465,9 @@ Release 2.8.0 - UNRELEASED
HADOOP-9477. Add posixGroups support for LDAP groups mapping service.
(Dapeng Sun via Yongjun Zhang)
HADOOP-11843. Make setting up the build environment easier.
(Niels Basjes via cnauroth)
IMPROVEMENTS
HADOOP-11719. [Fsshell] Remove bin/hadoop reference from

start-build-env.sh (new file, 50 lines)

@@ -0,0 +1,50 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e # exit on error
cd "$(dirname "$0")" # connect to root
docker build -t hadoop-build dev-support/docker
if [ "$(uname -s)" == "Linux" ]; then
USER_NAME=${SUDO_USER:=$USER}
USER_ID=$(id -u "${USER_NAME}")
GROUP_ID=$(id -g "${USER_NAME}")
else # boot2docker uid and gid
USER_NAME=$USER
USER_ID=1000
GROUP_ID=50
fi
docker build -t "hadoop-build-${USER_NAME}" - <<UserSpecificDocker
FROM hadoop-build
RUN groupadd --non-unique -g ${GROUP_ID} ${USER_NAME}
RUN useradd -g ${GROUP_ID} -u ${USER_ID} -k /root -m ${USER_NAME}
ENV HOME /home/${USER_NAME}
UserSpecificDocker
# By mapping the .m2 directory you can do an mvn install from
# within the container and use the result on your normal
# system. This also gives a significant speedup in subsequent
# builds because the dependencies are downloaded only once.
docker run --rm=true -t -i \
-v "${PWD}:/home/${USER_NAME}/hadoop" \
-w "/home/${USER_NAME}/hadoop" \
-v "${HOME}/.m2:/home/${USER_NAME}/.m2" \
-u "${USER_NAME}" \
"hadoop-build-${USER_NAME}"