HBASE-16215 clean up of ref guide and site for EOM versions.

Signed-off-by: Enis Soztutar <enis@apache.org>
Signed-off-by: Michael Stack <stack@apache.org>
Sean Busbey 2017-04-12 09:04:44 -05:00
parent d15f75b3cf
commit a8e6f33791
6 changed files with 44 additions and 203 deletions


@ -62,12 +62,11 @@ Any -1 on a patch by anyone vetoes a patch; it cannot be committed until the jus
.How to set fix version in JIRA on issue resolve
Here is how link:http://search-hadoop.com/m/azemIi5RCJ1[we agreed] to set versions in JIRA when we resolve an issue.
If master is going to be 0.98.0 then:
If master is going to be 2.0.0 and branch-1 is going to be 1.4.0, then:
* Commit only to master: Mark with 0.98
* Commit to 0.95 and master: Mark with 0.98, and 0.95.x
* Commit to 0.94.x and 0.95, and master: Mark with 0.98, 0.95.x, and 0.94.x
* Commit to 89-fb: Mark with 89-fb.
* Commit only to master: Mark with 2.0.0
* Commit to branch-1 and master: Mark with 2.0.0, and 1.4.0
* Commit to branch-1.3, branch-1, and master: Mark with 2.0.0, 1.4.0, and 1.3.x
* Commit site fixes: no version
[[hbase.when.to.close.jira]]


@ -93,54 +93,34 @@ This section lists required services and some required system configuration.
[[java]]
.Java
[cols="1,1,1,4", options="header"]
[cols="1,1,4", options="header"]
|===
|HBase Version
|JDK 6
|JDK 7
|JDK 8
|2.0
|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
|link:http://search-hadoop.com/m/YGbbsPxZ723m3as[Not Supported]
|yes
|1.3
|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
|yes
|yes
|1.2
|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
|yes
|yes
|1.1
|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
|yes
|Running with JDK 8 will work but is not well tested.
|1.0
|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
|yes
|Running with JDK 8 will work but is not well tested.
|0.98
|yes
|yes
|Running with JDK 8 works but is not well tested. Building with JDK 8 would require removal of the
deprecated `remove()` method of the `PoolMap` class and is under consideration. See
link:https://issues.apache.org/jira/browse/HBASE-7608[HBASE-7608] for more information about JDK 8
support.
|0.94
|yes
|yes
|N/A
|===
NOTE: In HBase 0.98.5 and newer, you must set `JAVA_HOME` on each node of your cluster. _hbase-env.sh_ provides a handy mechanism to do this.
NOTE: HBase will neither build nor compile with Java 6.
NOTE: You must set `JAVA_HOME` on each node of your cluster. _hbase-env.sh_ provides a handy mechanism to do this.
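For example, a minimal _hbase-env.sh_ entry might look like the following sketch. The JDK path is illustrative; point it at the JDK actually installed on each node.

[source,bourne]
----
# In conf/hbase-env.sh on every node; the path below is an example only.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
----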
[[os]]
.Operating System Utilities
@ -213,8 +193,8 @@ See link:http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Suppor
[TIP]
====
Hadoop 2.x is faster and includes features, such as short-circuit reads, which will help improve your HBase random read profile.
Hadoop 2.x also includes important bug fixes that will improve your overall HBase experience.
HBase 0.98 drops support for Hadoop 1.0, deprecates use of Hadoop 1.1+, and HBase 1.0 will not support Hadoop 1.x.
Hadoop 2.x also includes important bug fixes that will improve your overall HBase experience. HBase does not support running with
earlier versions of Hadoop. See the table below for requirements specific to different HBase versions.
Hadoop 3.x is still in early access releases and has not yet been sufficiently tested by the HBase community for production use cases.
====
@ -227,24 +207,21 @@ Use the following legend to interpret this table:
* "X" = not supported
* "NT" = Not tested
[cols="1,1,1,1,1,1,1,1", options="header"]
[cols="1,1,1,1,1", options="header"]
|===
| | HBase-0.94.x | HBase-0.98.x (Support for Hadoop 1.1+ is deprecated.) | HBase-1.0.x (Hadoop 1.x is NOT supported) | HBase-1.1.x | HBase-1.2.x | HBase-1.3.x | HBase-2.0.x
|Hadoop-1.0.x | X | X | X | X | X | X | X
|Hadoop-1.1.x | S | NT | X | X | X | X | X
|Hadoop-0.23.x | S | X | X | X | X | X | X
|Hadoop-2.0.x-alpha | NT | X | X | X | X | X | X
|Hadoop-2.1.0-beta | NT | X | X | X | X | X | X
|Hadoop-2.2.0 | NT | S | NT | NT | X | X | X
|Hadoop-2.3.x | NT | S | NT | NT | X | X | X
|Hadoop-2.4.x | NT | S | S | S | S | S | X
|Hadoop-2.5.x | NT | S | S | S | S | S | X
|Hadoop-2.6.0 | X | X | X | X | X | X | X
|Hadoop-2.6.1+ | NT | NT | NT | NT | S | S | S
|Hadoop-2.7.0 | X | X | X | X | X | X | X
|Hadoop-2.7.1+ | NT | NT | NT | NT | S | S | S
|Hadoop-2.8.0 | X | X | X | X | X | X | X
|Hadoop-3.0.0-alphax | NT | NT | NT | NT | NT | NT | NT
| | HBase-1.1.x | HBase-1.2.x | HBase-1.3.x | HBase-2.0.x
|Hadoop-2.0.x-alpha | X | X | X | X
|Hadoop-2.1.0-beta | X | X | X | X
|Hadoop-2.2.0 | NT | X | X | X
|Hadoop-2.3.x | NT | X | X | X
|Hadoop-2.4.x | S | S | S | X
|Hadoop-2.5.x | S | S | S | X
|Hadoop-2.6.0 | X | X | X | X
|Hadoop-2.6.1+ | NT | S | S | S
|Hadoop-2.7.0 | X | X | X | X
|Hadoop-2.7.1+ | NT | S | S | S
|Hadoop-2.8.0 | X | X | X | X
|Hadoop-3.0.0-alphax | NT | NT | NT | NT
|===
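When reading the table, it can help to confirm exactly which Hadoop release your cluster bits report. A quick check, assuming the `hadoop` command is on your PATH, is:

[source,bourne]
----
# Print the Hadoop release the installed binaries were built from.
$ hadoop version
----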
.Hadoop Pre-2.6.1 and JDK 1.8 Kerberos
@ -288,88 +265,6 @@ Make sure you replace the jar in HBase everywhere on your cluster.
Hadoop version mismatch issues have various manifestations, but often everything just looks like it has hung up.
====
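A rough sketch of the jar swap described above follows. The paths and layout are assumptions about a typical install; keep a backup of the originals and repeat the swap on every node.

[source,bourne]
----
# Illustrative only: replace the Hadoop jars bundled under HBase's lib/
# with the jars from the Hadoop release actually running on your cluster.
cd "$HBASE_HOME/lib"
mkdir -p hadoop-jars-backup
mv hadoop-*.jar hadoop-jars-backup/
for d in common hdfs mapreduce yarn; do
  cp "$HADOOP_HOME/share/hadoop/$d"/hadoop-*.jar .
done
----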
[[hadoop2.hbase_0.94]]
==== Apache HBase 0.94 with Hadoop 2
To get 0.94.x to run on Hadoop 2.2.0, you need to change the hadoop 2 and protobuf versions in the _pom.xml_. Here is a diff with the pom.xml changes:
[source]
----
$ svn diff pom.xml
Index: pom.xml
===================================================================
--- pom.xml (revision 1545157)
+++ pom.xml (working copy)
@@ -1034,7 +1034,7 @@
<slf4j.version>1.4.3</slf4j.version>
<log4j.version>1.2.16</log4j.version>
<mockito-all.version>1.8.5</mockito-all.version>
- <protobuf.version>2.4.0a</protobuf.version>
+ <protobuf.version>2.5.0</protobuf.version>
<stax-api.version>1.0.1</stax-api.version>
<thrift.version>0.8.0</thrift.version>
<zookeeper.version>3.4.5</zookeeper.version>
@@ -2241,7 +2241,7 @@
</property>
</activation>
<properties>
- <hadoop.version>2.0.0-alpha</hadoop.version>
+ <hadoop.version>2.2.0</hadoop.version>
<slf4j.version>1.6.1</slf4j.version>
</properties>
<dependencies>
----
The next step is to regenerate the Protobuf files, assuming that Protobuf has been installed:
* Go to the HBase root folder, using the command line;
* Type the following commands:
+
[source,bourne]
----
$ protoc -Isrc/main/protobuf --java_out=src/main/java src/main/protobuf/hbase.proto
----
+
[source,bourne]
----
$ protoc -Isrc/main/protobuf --java_out=src/main/java src/main/protobuf/ErrorHandling.proto
----
Build against the hadoop 2 profile by running something like the following command:
----
$ mvn clean install assembly:single -Dhadoop.profile=2.0 -DskipTests
----
[[hadoop.hbase_0.94]]
==== Apache HBase 0.92 and 0.94
HBase 0.92 and 0.94 can work with Hadoop versions 0.20.205, 0.22.x, 1.0.x, and 1.1.x.
HBase-0.94 can additionally work with Hadoop-0.23.x and 2.x, but you may have to recompile the code using the specific maven profile (see the top-level pom.xml).
[[hadoop.hbase_0.96]]
==== Apache HBase 0.96
As of Apache HBase 0.96.x, at least Apache Hadoop 1.0.x is required.
Hadoop 2 is strongly encouraged (faster but also has fixes that help MTTR). We will no longer run properly on older Hadoops such as 0.20.205 or branch-0.20-append.
Do not move to Apache HBase 0.96.x if you cannot upgrade your Hadoop. See link:http://search-hadoop.com/m/7vFVx4EsUb2[HBase, mail # dev - DISCUSS:
Have hbase require at least hadoop 1.0.0 in hbase 0.96.0?]
[[hadoop.older.versions]]
==== Hadoop versions 0.20.x - 1.x
DO NOT use Hadoop versions older than 2.2.0 for HBase versions greater than 1.0. If you are using an older version of HBase, check its release documentation for Hadoop-related information.
[[hadoop.security]]
==== Apache HBase on Secure Hadoop
Apache HBase will run on any Hadoop 0.20.x that incorporates Hadoop security features as long as you do as suggested above and replace the Hadoop jar that ships with HBase with the secure version.
If you want to read more about how to setup Secure HBase, see <<hbase.secure.configuration,hbase.secure.configuration>>.
[[dfs.datanode.max.transfer.threads]]
==== `dfs.datanode.max.transfer.threads` (((dfs.datanode.max.transfer.threads)))
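To see the value currently in effect for this HDFS setting, something like the following can be used; it assumes the `hdfs` command is on your PATH and reports the value from the local client configuration.

[source,bourne]
----
# Print the configured value of dfs.datanode.max.transfer.threads.
$ hdfs getconf -confKey dfs.datanode.max.transfer.threads
----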
@ -402,8 +297,8 @@ See also <<casestudies.max.transfer.threads,casestudies.max.transfer.threads>> a
[[zookeeper.requirements]]
=== ZooKeeper Requirements
ZooKeeper 3.4.x is required as of HBase 1.0.0.
HBase makes use of the `multi` functionality that is only available since Zookeeper 3.4.0. The `hbase.zookeeper.useMulti` configuration property defaults to `true` in HBase 1.0.0.
ZooKeeper 3.4.x is required.
HBase makes use of the `multi` functionality that is only available since ZooKeeper 3.4.0. The `hbase.zookeeper.useMulti` configuration property defaults to `true`.
Refer to link:https://issues.apache.org/jira/browse/HBASE-12241[HBASE-12241 (The crash of regionServer when taking deadserver's replication queue breaks replication)] and link:https://issues.apache.org/jira/browse/HBASE-6775[HBASE-6775 (Use ZK.multi when available for HBASE-6710 0.92/0.94 compatibility fix)] for background.
The property is deprecated and useMulti is always enabled in HBase 2.0.
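Because the `multi` support depends on the ZooKeeper release, it can be worth confirming what your quorum is actually running. One quick check is sketched below; the host and port are illustrative, and it assumes the `stat` four-letter command is enabled on your servers.

[source,bourne]
----
# Ask a quorum member which ZooKeeper version it is running.
echo stat | nc zk1.example.org 2181 | head -n 1
----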
@ -590,8 +485,6 @@ Check them out especially if HBase had trouble starting.
HBase also puts up a UI listing vital attributes.
By default it's deployed on the Master host at port 16010 (HBase RegionServers listen on port 16020 by default and put up an informational HTTP server at port 16030). If the Master is running on a host named `master.example.org` on the default port, point your browser at pass:[http://master.example.org:16010] to see the web interface.
Prior to HBase 0.98 the master UI was deployed on port 60010, and the HBase RegionServers UI on port 60030.
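If you prefer the command line to a browser, a quick reachability check of the Master UI might look like the sketch below; the hostname is the illustrative one from the paragraph above, and the port assumes the current defaults.

[source,bourne]
----
# Expect a 200 if the Master web UI is up.
curl -s -o /dev/null -w '%{http_code}\n' http://master.example.org:16010/
----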
Once HBase has started, see the <<shell_exercises,shell exercises>> section for how to create tables, add data, scan your insertions, and finally disable and drop your tables.
To stop HBase after exiting the HBase shell enter
@ -774,7 +667,7 @@ example9
[[hbase_env]]
==== _hbase-env.sh_
The following lines in the _hbase-env.sh_ file show how to set the `JAVA_HOME` environment variable (required for HBase 0.98.5 and newer) and set the heap to 4 GB (rather than the default value of 1 GB). If you copy and paste this example, be sure to adjust the `JAVA_HOME` to suit your environment.
The following lines in the _hbase-env.sh_ file show how to set the `JAVA_HOME` environment variable (required for HBase) and set the heap to 4 GB (rather than the default value of 1 GB). If you copy and paste this example, be sure to adjust the `JAVA_HOME` to suit your environment.
----
# The java implementation to use.


@ -675,15 +675,10 @@ public class SumEndPoint extends Sum.SumService implements Coprocessor, Coproces
[source, java]
----
Configuration conf = HBaseConfiguration.create();
// Use below code for HBase version 1.x.x or above.
Connection connection = ConnectionFactory.createConnection(conf);
TableName tableName = TableName.valueOf("users");
Table table = connection.getTable(tableName);
//Use below code HBase version 0.98.xx or below.
//HConnection connection = HConnectionManager.createConnection(conf);
//HTableInterface table = connection.getTable("users");
final Sum.SumRequest request = Sum.SumRequest.newBuilder().setFamily("salaryDet").setColumn("gross").build();
try {
Map<byte[], Long> results = table.coprocessorService(
@ -760,15 +755,10 @@ Then you can read the configuration using code like the following:
[source,java]
----
Configuration conf = HBaseConfiguration.create();
// Use below code for HBase version 1.x.x or above.
Connection connection = ConnectionFactory.createConnection(conf);
TableName tableName = TableName.valueOf("users");
Table table = connection.getTable(tableName);
//Use below code HBase version 0.98.xx or below.
//HConnection connection = HConnectionManager.createConnection(conf);
//HTableInterface table = connection.getTable("users");
Get get = new Get(Bytes.toBytes("admin"));
Result result = table.get(get);
for (Cell c : result.rawCells()) {


@ -306,37 +306,26 @@ See the <<hbase.unittests.cmds,hbase.unittests.cmds>> section in <<hbase.unittes
[[maven.build.hadoop]]
==== Building against various hadoop versions.
As of 0.96, Apache HBase supports building against Apache Hadoop versions: 1.0.3, 2.0.0-alpha and 3.0.0-SNAPSHOT.
By default, in 0.96 and earlier, we will build with Hadoop-1.0.x.
As of 0.98, Hadoop 1.x is deprecated and Hadoop 2.x is the default.
To change the version to build against, add a hadoop.profile property when you invoke +mvn+:
HBase supports building against Apache Hadoop versions: 2.y and 3.y (early release artifacts). By default we build against Hadoop 2.x.
To build against a specific release from the Hadoop 2.y line, set e.g. `-Dhadoop-two.version=2.6.3`.
[source,bourne]
----
mvn -Dhadoop.profile=1.0 ...
mvn -Dhadoop-two.version=2.6.3 ...
----
The above will build against whatever explicit hadoop 1.x version we have in our _pom.xml_ as our '1.0' version.
To change the major release line of Hadoop we build against, add a hadoop.profile property when you invoke +mvn+:
[source,bourne]
----
mvn -Dhadoop.profile=3.0 ...
----
The above will build against whatever explicit hadoop 3.y version we have in our _pom.xml_ as our '3.0' version.
Tests may not all pass so you may need to pass `-DskipTests` unless you are inclined to fix the failing tests.
.'dependencyManagement.dependencies.dependency.artifactId' for org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does not match a valid id pattern
[NOTE]
====
You will see ERRORs like the above title if you pass the _default_ profile; e.g.
if you pass +hadoop.profile=1.1+ when building 0.96 or +hadoop.profile=2.0+ when building HBase 0.98; just drop the hadoop.profile stipulation in this case to get your build to run again.
This seems to be a maven peculiarity that is probably fixable, but we've not spent the time trying to figure it out.
====
Similarly, for 3.0, you would just replace the profile value.
Note that Hadoop-3.0.0-SNAPSHOT does not currently have a deployed maven artifact - you will need to build and install your own in your local maven repository if you want to run against this profile.
In earlier versions of Apache HBase, you can build against older versions of Apache Hadoop, notably Hadoop 0.22.x and 0.23.x.
If you are running, for example, HBase-0.94 and want to build against Hadoop 0.23.x, you would run:
[source,bourne]
----
mvn -Dhadoop.profile=22 ...
----
To pick a particular Hadoop 3.y release, you'd set e.g. `-Dhadoop-three.version=3.0.0-alpha1`.
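Putting the profile switch and the version property together, a complete Hadoop 3 build invocation might look like the following; the release number is illustrative, and tests are skipped per the note above.

[source,bourne]
----
mvn -Dhadoop.profile=3.0 -Dhadoop-three.version=3.0.0-alpha1 clean install -DskipTests
----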
[[build.protobuf]]
==== Build Protobuf
@ -426,27 +415,6 @@ HBase 1.x requires Java 7 to build.
See <<java,java>> for Java requirements per HBase release.
====
=== Building against HBase 0.96-0.98
HBase 0.96.x will run on Hadoop 1.x or Hadoop 2.x.
HBase 0.98 still runs on both, but HBase 0.98 deprecates use of Hadoop 1.
HBase 1.x will _not_ run on Hadoop 1.
In the following procedures, we make a distinction between HBase 1.x builds and the awkward process involved in building HBase 0.96/0.98 for either Hadoop 1 or Hadoop 2 targets.
You must choose which Hadoop to build against.
It is not possible to build a single HBase binary that runs against both Hadoop 1 and Hadoop 2.
Hadoop is included in the build, because it is needed to run HBase in standalone mode.
Therefore, the set of modules included in the tarball changes, depending on the build target.
To determine which HBase you have, look at the HBase version.
The Hadoop version is embedded within it.
Maven, our build system, natively does not allow a single product to be built against different dependencies.
Also, Maven cannot change the set of included modules and write out the correct _pom.xml_ files with appropriate dependencies, even using two build targets, one for Hadoop 1 and another for Hadoop 2.
A prerequisite step is required, which takes as input the current _pom.xml_s and generates Hadoop 1 or Hadoop 2 versions using a script in the _dev-tools/_ directory, called _generate-hadoopX-poms.sh_ where [replaceable]_X_ is either `1` or `2`.
You then reference these generated poms when you build.
For now, just be aware of the difference between HBase 1.x builds and those of HBase 0.96-0.98.
This difference is important to the build instructions.
[[maven.settings.xml]]
.Example _~/.m2/settings.xml_ File
====
@ -496,9 +464,7 @@ For the build to sign them for you, you a properly configured _settings.xml_ in
[[maven.release]]
=== Making a Release Candidate
NOTE: These instructions are for building HBase 1.0.x.
For building earlier versions, e.g. 0.98.x, the process is different.
See this section under the respective release documentation folders.
NOTE: These instructions are for building HBase 1.y.z.
.Point Releases
If you are making a point release (for example to quickly address a critical incompatibility or security problem) off of a release branch instead of a development branch, the tagging instructions are slightly different.
@ -1440,8 +1406,6 @@ The following interface classifications are commonly used:
`@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC)`::
APIs for HBase coprocessor writers.
As of HBase 0.92/0.94/0.96/0.98 this api is still unstable.
No guarantees on compatibility with future versions.
No `@InterfaceAudience` Classification::
Packages without an `@InterfaceAudience` label are considered private.


@ -145,6 +145,9 @@ HBase Private API::
[[hbase.versioning.pre10]]
=== Pre 1.0 versions
.HBase Pre-1.0 versions are all EOM
NOTE: For new installations, do not deploy 0.94.y, 0.96.y, or 0.98.y. Deploy our stable version. See link:https://issues.apache.org/jira/browse/HBASE-11642[EOL 0.96], link:https://issues.apache.org/jira/browse/HBASE-16215[clean up of EOM releases], and link:http://www.apache.org/dist/hbase/[the header of our downloads].
Before the semantic versioning scheme was adopted, pre-1.0 HBase tracked either Hadoop's versions (0.2x) or 0.9x versions. If you are into the arcane, check out our old wiki page on link:http://wiki.apache.org/hadoop/Hbase/HBaseVersions[HBase Versioning], which tries to connect the HBase version dots. The sections below cover ONLY the releases before 1.0.
[[hbase.development.series]]
@ -260,9 +263,6 @@ A rolling upgrade from 0.94.x directly to 0.98.x does not work. The upgrade path
==== The "Singularity"
.HBase 0.96.x was EOL'd, September 1st, 2014
NOTE: Do not deploy 0.96.x. Deploy at least 0.98.x. See link:https://issues.apache.org/jira/browse/HBASE-11642[EOL 0.96].
You will have to stop your old 0.94.x cluster completely to upgrade. If you are replicating between clusters, both clusters will have to go down to upgrade. Make sure it is a clean shutdown. The fewer WAL files around, the faster the upgrade will run (the upgrade will split any log files it finds in the filesystem as part of the upgrade process). All clients must be upgraded to 0.96 too.
The API has changed. You will need to recompile your code against 0.96 and you may need to adjust applications to go against new APIs (TODO: List of changes).


@ -121,11 +121,6 @@
<item name="X-Ref" href="1.1/xref/index.html" target="_blank" />
<item name="Ref Guide (single-page)" href="1.1/book.html" target="_blank" />
</item>
<item name="0.94 Documentation">
<item name="API" href="0.94/apidocs/index.html" target="_blank" />
<item name="X-Ref" href="0.94/xref/index.html" target="_blank" />
<item name="Ref Guide (single-page)" href="0.94/book.html" target="_blank" />
</item>
</menu>
<menu name="ASF">
<item name="Apache Software Foundation" href="http://www.apache.org/foundation/" target="_blank" />