HBASE-20260 Remove old content from book

This commit is contained in:
Mike Drob 2018-03-22 15:05:30 -05:00
parent b5881dbd3f
commit 7b4b0dfadf
8 changed files with 55 additions and 490 deletions

View File

@@ -1509,11 +1509,14 @@ Alphanumeric Rowkeys::
Using a Custom Algorithm::
The RegionSplitter tool is provided with HBase, and uses a _SplitAlgorithm_ to determine split points for you.
As parameters, you give it the algorithm, desired number of regions, and column families.
It includes two split algorithms.
It includes three split algorithms.
The first is the
`link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.HexStringSplit.html[HexStringSplit]`
algorithm, which assumes the row keys are hexadecimal strings.
The second,
The second is the
`link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.DecimalStringSplit.html[DecimalStringSplit]`
algorithm, which assumes the row keys are decimal strings in the range 00000000 to 99999999.
The third,
`link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.UniformSplit.html[UniformSplit]`,
assumes the row keys are random byte arrays.
You will probably need to develop your own
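As a usage sketch of the provided algorithms: the following pre-splits a new table into 10 regions with HexStringSplit (the table name `test_table` and column family `f1` are illustrative):

[source,bash]
----
$ hbase org.apache.hadoop.hbase.util.RegionSplitter test_table HexStringSplit -c 10 -f f1
----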

View File

@@ -19,7 +19,7 @@
*/
////
[[casestudies]]
[[backuprestore]]
= Backup and Restore
:doctype: book
:numbered:

View File

@@ -441,7 +441,7 @@ $ hbase org.apache.hadoop.hbase.util.LoadTestTool -write 1:10:100 -num_keys 1000
====
[[data.block.encoding.enable]]
== Enable Data Block Encoding
=== Enable Data Block Encoding
Codecs are built into HBase so no extra configuration is needed.
Codecs are enabled on a table by setting the `DATA_BLOCK_ENCODING` property.
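For example, a minimal sketch in the HBase shell (the table and column family names are illustrative):

----
hbase> alter 'test', { NAME => 'cf', DATA_BLOCK_ENCODING => 'FAST_DIFF' }
----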

View File

@@ -113,7 +113,7 @@ This section lists required services and some required system configuration.
|===
NOTE: HBase will neither build nor compile with Java 6.
NOTE: HBase will neither build nor run with Java 6.
NOTE: You must set `JAVA_HOME` on each node of your cluster. _hbase-env.sh_ provides a handy mechanism to do this.
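For example, a typical _hbase-env.sh_ entry might look like the following (the JDK path is illustrative):

[source,bash]
----
# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
----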
@@ -123,11 +123,7 @@ ssh::
HBase uses the Secure Shell (ssh) command and utilities extensively to communicate between cluster nodes. Each server in the cluster must be running `ssh` so that the Hadoop and HBase daemons can be managed. You must be able to connect to all nodes via SSH, including the local node, from the Master as well as any backup Master, using a shared key rather than a password. You can see the basic methodology for such a set-up in Linux or Unix systems at "<<passwordless.ssh.quickstart>>". If your cluster nodes use OS X, see the section, link:https://wiki.apache.org/hadoop/Running_Hadoop_On_OS_X_10.5_64-bit_%28Single-Node_Cluster%29[SSH: Setting up Remote Desktop and Enabling Self-Login] on the Hadoop wiki.
DNS::
HBase uses the local hostname to self-report its IP address. Both forward and reverse DNS resolving must work in versions of HBase previous to 0.92.0. The link:https://github.com/sujee/hadoop-dns-checker[hadoop-dns-checker] tool can be used to verify DNS is working correctly on the cluster. The project `README` file provides detailed instructions on usage.
Loopback IP::
Prior to hbase-0.96.0, HBase only used the IP address `127.0.0.1` to refer to `localhost`, and this was not configurable.
See <<loopback.ip,Loopback IP>> for more details.
HBase uses the local hostname to self-report its IP address.
NTP::
The clocks on cluster nodes should be synchronized. A small amount of variation is acceptable, but larger amounts of skew can cause erratic and unexpected behavior. Time synchronization is one of the first things to check if you see unexplained problems in your cluster. It is recommended that you run a Network Time Protocol (NTP) service, or another time-synchronization mechanism on your cluster and that all nodes look to the same service for time synchronization. See the link:http://www.tldp.org/LDP/sag/html/basic-ntp-config.html[Basic NTP Configuration] at [citetitle]_The Linux Documentation Project (TLDP)_ to set up NTP.
@@ -171,14 +167,14 @@ Linux Shell::
All of the shell scripts that come with HBase rely on the link:http://www.gnu.org/software/bash[GNU Bash] shell.
Windows::
Prior to HBase 0.96, running HBase on Microsoft Windows was limited only for testing purposes.
Running production systems on Windows machines is not recommended.
[[hadoop]]
=== link:https://hadoop.apache.org[Hadoop](((Hadoop)))
The following table summarizes the versions of Hadoop supported with each version of HBase.
The following table summarizes the versions of Hadoop supported with each version of HBase. Older versions not appearing in this table are considered unsupported and likely missing necessary features, while newer versions are untested but may be suitable.
Based on the version of HBase, you should select the most appropriate version of Hadoop.
You can use Apache Hadoop, or a vendor's distribution of Hadoop.
No distinction is made here.
@@ -205,10 +201,6 @@ Use the following legend to interpret this table:
[cols="1,1,1,1", options="header"]
|===
| | HBase-1.2.x | HBase-1.3.x | HBase-2.0.x
|Hadoop-2.0.x-alpha | X | X | X
|Hadoop-2.1.0-beta | X | X | X
|Hadoop-2.2.0 | X | X | X
|Hadoop-2.3.x | X | X | X
|Hadoop-2.4.x | S | S | X
|Hadoop-2.5.x | S | S | X
|Hadoop-2.6.0 | X | X | X
@@ -294,9 +286,6 @@ See also <<casestudies.max.transfer.threads,casestudies.max.transfer.threads>> a
=== ZooKeeper Requirements
ZooKeeper 3.4.x is required.
HBase makes use of the `multi` functionality that is only available since Zookeeper 3.4.0. The `hbase.zookeeper.useMulti` configuration property defaults to `true`.
Refer to link:https://issues.apache.org/jira/browse/HBASE-12241[HBASE-12241 (The crash of regionServer when taking deadserver's replication queue breaks replication)] and link:https://issues.apache.org/jira/browse/HBASE-6775[HBASE-6775 (Use ZK.multi when available for HBASE-6710 0.92/0.94 compatibility fix)] for background.
The property is deprecated and useMulti is always enabled in HBase 2.0.
[[standalone_dist]]
== HBase run modes: Standalone and Distributed
@@ -543,14 +532,14 @@ Usually this ensemble location is kept out in the _hbase-site.xml_ and is picked
If you are configuring an IDE to run an HBase client, you should include the _conf/_ directory on your classpath so _hbase-site.xml_ settings can be found (or add _src/test/resources_ to pick up the hbase-site.xml used by tests).
Minimally, an HBase client needs hbase-client module in its dependencies when connecting to a cluster:
For Java applications using Maven, including the hbase-shaded-client module is the recommended dependency when connecting to a cluster:
[source,xml]
----
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>1.2.4</version>
<artifactId>hbase-shaded-client</artifactId>
<version>2.0.0</version>
</dependency>
----
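Once the dependency is in place, a minimal client sketch looks like the following, assuming a table named `test` already exists and an _hbase-site.xml_ describing the cluster is on the classpath:

[source,java]
----
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class TestClient {
  public static void main(String[] args) throws Exception {
    // Reads hbase-site.xml (and hbase-default.xml) from the classpath.
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("test"))) {
      Result result = table.get(new Get(Bytes.toBytes("row1")));
      System.out.println("value: " + Bytes.toString(result.value()));
    }
  }
}
----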
@@ -785,7 +774,6 @@ For most usage patterns, you should use automatic splitting.
See <<manual_region_splitting_decisions,manual region splitting decisions>> for more information about manual region splitting.
Instead of allowing HBase to split your regions automatically, you can choose to manage the splitting yourself.
This feature was added in HBase 0.90.0.
Manually managing splits works if you know your keyspace well; otherwise, let HBase figure out where to split for you.
Manual splitting can mitigate region creation and movement under load.
It also makes it so region boundaries are known and invariant (if you disable region splitting). If you use manual splits, it is easier doing staggered, time-based major compactions to spread out your network IO load.
@@ -811,13 +799,12 @@ Otherwise, the cluster can be prone to compaction storms with a large number of
It is important to understand that it is the data growth that causes compaction storms, not the manual split decision.
If the regions are split into too many large regions, you can increase the major compaction interval by configuring `HConstants.MAJOR_COMPACTION_PERIOD`.
HBase 0.90 introduced `org.apache.hadoop.hbase.util.RegionSplitter`, which provides a network-IO-safe rolling split of all regions.
The `org.apache.hadoop.hbase.util.RegionSplitter` utility also provides a network-IO-safe rolling split of all regions.
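As a sketch, a rolling split that keeps at most 2 outstanding splits at a time might look like this (the table name is illustrative; check the tool's usage output for the exact flags in your version):

[source,bash]
----
$ hbase org.apache.hadoop.hbase.util.RegionSplitter -r -o 2 test_table HexStringSplit
----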
[[managed.compactions]]
==== Managed Compactions
By default, major compactions are scheduled to run once in a 7-day period.
Prior to HBase 0.96.x, major compactions were scheduled to happen once per day by default.
If you need to control exactly when and how often major compaction runs, you can disable managed major compactions.
See the entry for `hbase.hregion.majorcompaction` in the <<compaction.parameters,compaction.parameters>> table for details.
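For example, setting `hbase.hregion.majorcompaction` to `0` in _hbase-site.xml_ disables time-based major compactions so you can trigger them yourself, though you then remain responsible for running them:

[source,xml]
----
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>0</value>
</property>
----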
@@ -937,8 +924,8 @@ To enable monitoring and management from remote systems, you need to set system
See the link:http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html[official documentation] for more information.
Historically, besides above port mentioned, JMX opens two additional random TCP listening ports, which could lead to port conflict problem. (See link:https://issues.apache.org/jira/browse/HBASE-10289[HBASE-10289] for details)
As an alternative, You can use the coprocessor-based JMX implementation provided by HBase.
To enable it in 0.99 or above, add below property in _hbase-site.xml_:
As an alternative, you can use the coprocessor-based JMX implementation provided by HBase.
To enable it, add below property in _hbase-site.xml_:
[source,xml]
----
@@ -1033,8 +1020,8 @@ The corresponding properties for port configuration are `master.rmi.registry.por
[[dyn_config]]
== Dynamic Configuration
Since HBase 1.0.0, it is possible to change a subset of the configuration without requiring a server restart.
In the HBase shell, there are new operators, `update_config` and `update_all_config` that will prompt a server or all servers to reload configuration.
It is possible to change a subset of the configuration without requiring a server restart.
In the HBase shell, the operations `update_config` and `update_all_config` will prompt a server or all servers to reload configuration.
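In the HBase shell this looks like the following sketch (the server name, made up of hostname, port, and start code, is illustrative):

----
hbase> update_config 'hostname.example.com,16020,1507245568048'
hbase> update_all_config
----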
Only a subset of all configurations can currently be changed in the running server.
Here are those configurations:

View File

@@ -44,24 +44,6 @@ table, enable or disable the table, and start and stop HBase.
Apart from downloading HBase, this procedure should take less than 10 minutes.
[[loopback.ip]]
[NOTE]
====
.Loopback IP - HBase 0.94.x and earlier
Prior to HBase 0.94.x, HBase expected the loopback IP address to be 127.0.0.1.
Ubuntu and some other distributions default to 127.0.1.1 and this will cause
problems for you. See link:https://web-beta.archive.org/web/20140104070155/http://blog.devving.com/why-does-hbase-care-about-etchosts[Why does HBase care about /etc/hosts?] for detail
The following _/etc/hosts_ file works correctly for HBase 0.94.x and earlier, on Ubuntu. Use this as a template if you run into trouble.
[listing]
----
127.0.0.1 localhost
127.0.0.1 ubuntu.ubuntu-domain ubuntu
----
This issue has been fixed in hbase-0.96.0 and beyond.
====
=== JDK Version Requirements
HBase requires that a JDK be installed.

View File

@@ -46,15 +46,16 @@ Usage: hbase [<options>] <command> [<args>]
Options:
--config DIR Configuration direction to use. Default: ./conf
--hosts HOSTS Override the list in 'regionservers' file
--auth-as-server Authenticate to ZooKeeper using servers configuration
Commands:
Some commands take arguments. Pass no args or -h for usage.
shell Run the HBase shell
hbck Run the hbase 'fsck' tool
snapshot Tool for managing snapshots
wal Write-ahead-log analyzer
hfile Store file analyzer
zkcli Run the ZooKeeper shell
upgrade Upgrade hbase
master Run an HBase HMaster node
regionserver Run an HBase HRegionServer node
zookeeper Run a ZooKeeper server
@@ -66,6 +67,7 @@ Some commands take arguments. Pass no args or -h for usage.
mapredcp Dump CLASSPATH entries required by mapreduce
pe Run PerformanceEvaluation
ltt Run LoadTestTool
canary Run the Canary tool
version Print the version
CLASSNAME Run the class named CLASSNAME
----
@@ -81,20 +83,27 @@ To see the usage, use the `--help` parameter.
----
$ ${HBASE_HOME}/bin/hbase canary -help
Usage: bin/hbase org.apache.hadoop.hbase.tool.Canary [opts] [table1 [table2]...] | [regionserver1 [regionserver2]..]
Usage: hbase org.apache.hadoop.hbase.tool.Canary [opts] [table1 [table2]...] | [regionserver1 [regionserver2]..]
where [opts] are:
-help Show this help and exit.
-regionserver replace the table argument to regionserver,
which means to enable regionserver mode
-allRegions Tries all regions on a regionserver,
only works in regionserver mode.
-zookeeper Tries to grab zookeeper.znode.parent
on each zookeeper instance
-daemon Continuous check at defined intervals.
-interval <N> Interval between checks (sec)
-e Use region/regionserver as regular expression
which means the region/regionserver is regular expression pattern
-e Use table/regionserver as regular expression
which means the table/regionserver is regular expression pattern
-f <B> stop whole program if first error occurs, default is true
-t <N> timeout for a check, default is 600000 (milliseconds)
-t <N> timeout for a check, default is 600000 (millisecs)
-writeTableTimeout <N> write timeout for the writeTable, default is 600000 (millisecs)
-readTableTimeouts <tableName>=<read timeout>,<tableName>=<read timeout>, ... comma-separated list of read timeouts per table (no spaces), default is 600000 (millisecs)
-writeSniffing enable the write sniffing in canary
-treatFailureAsError treats read / write failure as error
-writeTable The table used for write sniffing. Default is hbase:canary
-Dhbase.canary.read.raw.enabled=<true/false> Use this flag to enable or disable raw scan during read canary test Default is false and raw is not enabled during scan
-D<configProperty>=<value> assigning or override the configuration params
----
@@ -107,6 +116,7 @@ private static final int USAGE_EXIT_CODE = 1;
private static final int INIT_ERROR_EXIT_CODE = 2;
private static final int TIMEOUT_ERROR_EXIT_CODE = 3;
private static final int ERROR_EXIT_CODE = 4;
private static final int FAILURE_EXIT_CODE = 5;
----
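Before the cluster-specific examples below, here is a minimal monitoring sketch that acts on these exit codes (the table name and alerting hook are illustrative):

[source,bash]
----
#!/usr/bin/env bash
# One-shot canary check with a 60 second per-check timeout.
hbase org.apache.hadoop.hbase.tool.Canary -t 60000 my_table
status=$?
if [ "$status" -ne 0 ]; then
  echo "HBase canary exited with code ${status}" >&2
  # Hook your alerting mechanism here.
fi
----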
Here are some examples based on the following given case.
@@ -737,27 +747,20 @@ Options:
=== `hbase pe`
The `hbase pe` command is a shortcut provided to run the `org.apache.hadoop.hbase.PerformanceEvaluation` tool, which is used for testing.
The `hbase pe` command was introduced in HBase 0.98.4.
The `hbase pe` command runs the PerformanceEvaluation tool, which is used for testing.
The PerformanceEvaluation tool accepts many different options and commands.
For usage instructions, run the command with no options.
To run PerformanceEvaluation prior to HBase 0.98.4, issue the command `hbase org.apache.hadoop.hbase.PerformanceEvaluation`.
The PerformanceEvaluation tool has received many updates in recent HBase releases, including support for namespaces, support for tags, cell-level ACLs and visibility labels, multiget support for RPC calls, increased sampling sizes, an option to randomly sleep during testing, and ability to "warm up" the cluster before testing starts.
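For instance, a small local run that skips MapReduce might look like this (the client count is illustrative):

[source,bash]
----
$ hbase pe --nomapred randomWrite 4
----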
=== `hbase ltt`
The `hbase ltt` command is a shortcut provided to run the `org.apache.hadoop.hbase.util.LoadTestTool` utility, which is used for testing.
The `hbase ltt` command was introduced in HBase 0.98.4.
The `hbase ltt` command runs the LoadTestTool utility, which is used for testing.
You must specify either `-write` or `-update-read` as the first option.
You must specify one of `-write`, `-update`, or `-read` as the first option.
For general usage instructions, pass the `-h` option.
To run LoadTestTool prior to HBase 0.98.4, issue the command +hbase
org.apache.hadoop.hbase.util.LoadTestTool+.
The LoadTestTool has received many updates in recent HBase releases, including support for namespaces, support for tags, cell-level ACLS and visibility labels, testing security-related features, ability to specify the number of regions per server, tests for multi-get RPC calls, and tests relating to replication.
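For example, mirroring the invocation shown earlier in this commit, a write-mode run might look like this (the key count is illustrative):

[source,bash]
----
$ hbase ltt -write 1:10:100 -num_keys 1000000
----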
[[ops.regionmgt]]

View File

@@ -101,6 +101,11 @@ To disable, set the logging level back to `INFO` level.
[[trouble.log.gc]]
=== JVM Garbage Collection Logs
[NOTE]
====
All example Garbage Collection logs in this section are based on Java 8 output. The introduction of Unified Logging in Java 9 and newer will result in very different looking logs.
====
HBase is memory intensive, and using the default GC you can see long pauses in all threads, including the _Juliet Pause_ aka "GC of Death". To help debug this, or to confirm it is happening, GC logging can be turned on in the Java virtual machine.
To enable, in _hbase-env.sh_, uncomment one of the below lines:
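They resemble the following Java 8 style entry (the log path is illustrative; Java 9+ unified logging uses `-Xlog:gc` instead):

[source,bash]
----
# Basic GC logging to its own file (Java 8 flags; path is illustrative).
export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc-hbase.log"
----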
@@ -258,7 +263,6 @@ link:https://issues.apache.org/jira/browse/HBASE[JIRA] is also really helpful wh
==== Master Web Interface
The Master starts a web-interface on port 16010 by default.
(Up to and including 0.98 this was port 60010)
The Master web UI lists created tables and their definition (e.g., ColumnFamilies, blocksize, etc.). Additionally, the available RegionServers in the cluster are listed along with selected high-level metrics (requests, number of regions, usedHeap, maxHeap). The Master web UI allows navigation to each RegionServer's web UI.
@@ -266,7 +270,6 @@ The Master web UI lists created tables and their definition (e.g., ColumnFamilie
==== RegionServer Web Interface
RegionServers start a web-interface on port 16030 by default.
(Up to an including 0.98 this was port 60030)
The RegionServer web UI lists online regions and their start/end keys, as well as point-in-time RegionServer metrics (requests, regions, storeFileIndexSize, compactionQueueSize, etc.).
@@ -564,14 +567,6 @@ You can also tail all the logs at the same time, edit files, etc.
For more information on the HBase client, see <<architecture.client,client>>.
=== Missed Scan Results Due To Mismatch Of `hbase.client.scanner.max.result.size` Between Client and Server
If either the client or server version is lower than 0.98.11/1.0.0 and the server
has a smaller value for `hbase.client.scanner.max.result.size` than the client, scan
requests that reach the server's `hbase.client.scanner.max.result.size` are likely
to miss data. In particular, 0.98.11 defaults `hbase.client.scanner.max.result.size`
to 2 MB but other versions default to larger values. For this reason, be very careful
using 0.98.11 servers with any other client version.
[[trouble.client.scantimeout]]
=== ScannerTimeoutException or UnknownScannerException
@@ -683,12 +678,6 @@ A workaround is passing your client-side JVM a reasonable value for `-XX:MaxDire
By default, the `MaxDirectMemorySize` is equal to your `-Xmx` max heapsize setting (if `-Xmx` is set). Try setting it to something smaller (for example, one user had success setting it to `1g` when they had a client-side heap of `12g`). If you set it too small, it will bring on `FullGCs` so keep it a bit hefty.
You want to make this setting client-side only especially if you are running the new experimental server-side off-heap cache since this feature depends on being able to use big direct buffers (You may have to keep separate client-side and server-side config dirs).
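A hedged sketch of a client launch using the sizes mentioned above (the class and jar names are illustrative):

[source,bash]
----
$ java -Xmx12g -XX:MaxDirectMemorySize=1g \
    -cp "myapp.jar:$(hbase classpath)" com.example.MyHBaseClient
----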
[[trouble.client.slowdown.admin]]
=== Client Slowdown When Calling Admin Methods (flush, compact, etc.)
This is a client issue fixed by link:https://issues.apache.org/jira/browse/HBASE-5073[HBASE-5073] in 0.90.6.
There was a ZooKeeper leak in the client and the client was getting pummeled by ZooKeeper events with each additional invocation of the admin API.
[[trouble.client.security.rpc]]
=== Secure Client Cannot Connect ([Caused by GSSException: No valid credentials provided(Mechanism level: Failed to find any Kerberos tgt)])
@@ -892,7 +881,6 @@ See <<managed.compactions>> for more information on managing compactions.
=== Loopback IP
HBase expects the loopback IP Address to be 127.0.0.1.
See the Getting Started section on <<loopback.ip>>.
[[trouble.network.ints]]
=== Network Interfaces
@@ -1076,13 +1064,6 @@ This exception is returned back to the client and then the client goes back to `
However, if the NotServingRegionException is logged ERROR, then the client ran out of retries and something is probably wrong.
[[trouble.rs.runtime.double_listed_regions]]
==== Regions listed by domain name, then IP
Fix your DNS.
In versions of Apache HBase before 0.92.x, reverse DNS needs to give same answer as forward lookup.
See link:https://issues.apache.org/jira/browse/HBASE-3431[HBASE 3431 RegionServer is not using the name given it by the master; double entry in master listing of servers] for gory details.
[[brand.new.compressor]]
==== Logs flooded with '2011-01-10 12:40:48,407 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor' messages
@@ -1216,31 +1197,6 @@ See Andrew's answer here, up on the user list: link:http://search-hadoop.com/m/s
[[trouble.versions]]
== HBase and Hadoop version issues
[[trouble.versions.205]]
=== `NoClassDefFoundError` when trying to run 0.90.x on hadoop-0.20.205.x (or hadoop-1.0.x)
Apache HBase 0.90.x does not ship with hadoop-0.20.205.x, etc.
To make it run, you need to replace the hadoop jars that Apache HBase shipped with in its _lib_ directory with those of the Hadoop you want to run HBase on.
If even after replacing Hadoop jars you get the below exception:
[source]
----
sv4r6s38: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration
sv4r6s38: at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<init>(DefaultMetricsSystem.java:37)
sv4r6s38: at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<clinit>(DefaultMetricsSystem.java:34)
sv4r6s38: at org.apache.hadoop.security.UgiInstrumentation.create(UgiInstrumentation.java:51)
sv4r6s38: at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:209)
sv4r6s38: at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:177)
sv4r6s38: at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:229)
sv4r6s38: at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:83)
sv4r6s38: at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:202)
sv4r6s38: at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:177)
----
you need to copy under _hbase/lib_, the _commons-configuration-X.jar_ you find in your Hadoop's _lib_ directory.
That should fix the above complaint.
[[trouble.wrong.version]]
=== ...cannot communicate with client version...
@@ -1249,67 +1205,6 @@ If you see something like the following in your logs [computeroutput]+... 2012-0
shutdown. org.apache.hadoop.ipc.RemoteException: Server IPC version 7 cannot communicate
with client version 4 ...+ ...are you trying to talk to an Hadoop 2.0.x from an HBase that has an Hadoop 1.0.x client? Use the HBase built against Hadoop 2.0 or rebuild your HBase passing the +-Dhadoop.profile=2.0+ attribute to Maven (See <<maven.build.hadoop>> for more).
== IPC Configuration Conflicts with Hadoop
If the Hadoop configuration is loaded after the HBase configuration, and you have configured custom IPC settings in both HBase and Hadoop, the Hadoop values may overwrite the HBase values.
There is normally no need to change these settings for HBase, so this problem is an edge case.
However, link:https://issues.apache.org/jira/browse/HBASE-11492[HBASE-11492] renames these settings for HBase to remove the chance of a conflict.
Each of the setting names have been prefixed with `hbase.`, as shown in the following table.
No action is required related to these changes unless you are already experiencing a conflict.
These changes were backported to HBase 0.98.x and apply to all newer versions.
[cols="1,1", options="header"]
|===
| Pre-0.98.x
| 0.98-x And Newer
| ipc.server.listen.queue.size
| hbase.ipc.server.listen.queue.size
| ipc.server.max.callqueue.size
| hbase.ipc.server.max.callqueue.size
| ipc.server.callqueue.handler.factor
| hbase.ipc.server.callqueue.handler.factor
| ipc.server.callqueue.read.share
| hbase.ipc.server.callqueue.read.share
| ipc.server.callqueue.type
| hbase.ipc.server.callqueue.type
| ipc.server.queue.max.call.delay
| hbase.ipc.server.queue.max.call.delay
| ipc.server.max.callqueue.length
| hbase.ipc.server.max.callqueue.length
| ipc.server.read.threadpool.size
| hbase.ipc.server.read.threadpool.size
| ipc.server.tcpkeepalive
| hbase.ipc.server.tcpkeepalive
| ipc.server.tcpnodelay
| hbase.ipc.server.tcpnodelay
| ipc.client.call.purge.timeout
| hbase.ipc.client.call.purge.timeout
| ipc.client.connection.maxidletime
| hbase.ipc.client.connection.maxidletime
| ipc.client.idlethreshold
| hbase.ipc.client.idlethreshold
| ipc.client.kill.max
| hbase.ipc.client.kill.max
| ipc.server.scan.vtime.weight
| hbase.ipc.server.scan.vtime.weight
|===
== HBase and HDFS
General configuration guidance for Apache HDFS is out of the scope of this guide.

View File

@@ -27,19 +27,15 @@
:icons: font
:experimental:
You cannot skip major versions when upgrading. If you are upgrading from version 0.90.x to 0.94.x, you must first go from 0.90.x to 0.92.x and then go from 0.92.x to 0.94.x.
NOTE: It may be possible to skip across versions -- for example go from 0.92.2 straight to 0.98.0 just following the 0.96.x upgrade instructions -- but these scenarios are untested.
You cannot skip major versions when upgrading. If you are upgrading from version 0.98.x to 2.x, you must first go from 0.98.x to 1.2.x and then go from 1.2.x to 2.x.
Review <<configuration>>, in particular <<hadoop>>. Familiarize yourself with <<hbase_supported_tested_definitions>>.
[[hbase.versioning]]
== HBase version number and compatibility
HBase has two versioning schemes, pre-1.0 and post-1.0. Both are detailed below.
[[hbase.versioning.post10]]
=== Post 1.0 versions
=== Aspirational Semantic Versioning
Starting with the 1.0.0 release, HBase is working towards link:http://semver.org/[Semantic Versioning] for its release versioning. In summary:
@@ -155,23 +151,9 @@ HBase LimitedPrivate API::
HBase Private API::
All classes annotated with InterfaceAudience.Private or all classes that do not have the annotation are for HBase internal use only. The interfaces and method signatures can change at any point in time. If you are relying on a particular interface that is marked Private, you should open a jira to propose changing the interface to be Public or LimitedPrivate, or an interface exposed for this purpose.
[[hbase.versioning.pre10]]
=== Pre 1.0 versions
.HBase Pre-1.0 versions are all EOM
NOTE: For new installations, do not deploy 0.94.y, 0.96.y, or 0.98.y. Deploy our stable version. See link:https://issues.apache.org/jira/browse/HBASE-11642[EOL 0.96], link:https://issues.apache.org/jira/browse/HBASE-16215[clean up of EOM releases], and link:https://www.apache.org/dist/hbase/[the header of our downloads].
Before the semantic versioning scheme pre-1.0, HBase tracked either Hadoop's versions (0.2x) or 0.9x versions. If you are into the arcane, checkout our old wiki page on link:https://web.archive.org/web/20150905071342/https://wiki.apache.org/hadoop/Hbase/HBaseVersions[HBase Versioning] which tries to connect the HBase version dots. Below sections cover ONLY the releases before 1.0.
[[hbase.development.series]]
.Odd/Even Versioning or "Development" Series Releases
Ahead of big releases, we have been putting up preview versions to start the feedback cycle turning-over earlier. These "Development" Series releases, always odd-numbered, come with no guarantees, not even regarding being able to upgrade between two sequential releases (we reserve the right to break compatibility across "Development" Series releases). Needless to say, these releases are not for production deploys. They are a preview of what is coming in the hope that interested parties will take the release for a test drive and flag us early if there are issues we've missed ahead of our rolling a production-worthy release.
Our first "Development" Series was the 0.89 set that came out ahead of HBase 0.90.0. HBase 0.95 is another "Development" Series that portends HBase 0.96.0. 0.99.x is the last series in "developer preview" mode before 1.0. Afterwards, we will be using semantic versioning naming scheme (see above).
[[hbase.binary.compatibility]]
.Binary Compatibility
When we say two HBase versions are compatible, we mean that the versions are wire and binary compatible. Compatible HBase versions means that clients can talk to compatible but differently versioned servers. It means too that you can just swap out the jars of one version and replace them with the jars of another, compatible version and all will just work. Unless otherwise specified, HBase point versions are (mostly) binary compatible. You can safely do rolling upgrades between binary compatible versions; i.e. across point versions: e.g. from 0.94.5 to 0.94.6. See link:[Does compatibility between versions also mean binary compatibility?] discussion on the HBase dev mailing list.
When we say two HBase versions are compatible, we mean that the versions are wire and binary compatible. Compatible HBase versions means that clients can talk to compatible but differently versioned servers. It means too that you can just swap out the jars of one version and replace them with the jars of another, compatible version and all will just work. Unless otherwise specified, HBase point versions are (mostly) binary compatible. You can safely do rolling upgrades between binary compatible versions; i.e. across maintenance releases: e.g. from 1.2.4 to 1.2.6. See link:[Does compatibility between versions also mean binary compatibility?] discussion on the HBase dev mailing list.
[[hbase.rolling.upgrade]]
=== Rolling Upgrades
@@ -189,9 +171,9 @@ The rolling-restart script will first gracefully stop and restart the master, an
[[hbase.rolling.restart]]
.Rolling Upgrade Between Versions that are Binary/Wire Compatible
Unless otherwise specified, HBase point versions are binary compatible. You can do a <<hbase.rolling.upgrade>> between HBase point versions. For example, you can go to 0.94.6 from 0.94.5 by doing a rolling upgrade across the cluster replacing the 0.94.5 binary with a 0.94.6 binary.
Unless otherwise specified, HBase minor versions are binary compatible. You can do a <<hbase.rolling.upgrade>> between HBase point versions. For example, you can go to 1.2.6 from 1.2.4 by doing a rolling upgrade across the cluster replacing the 1.2.4 binary with a 1.2.6 binary.
In the minor version-particular sections below, we call out where the versions are wire/protocol compatible and in this case, it is also possible to do a <<hbase.rolling.upgrade>>. For example, in <<upgrade1.0.rolling.upgrade>>, we state that it is possible to do a rolling upgrade between hbase-0.98.x and hbase-1.0.0.
In the minor version-particular sections below, we call out where the versions are wire/protocol compatible and in this case, it is also possible to do a <<hbase.rolling.upgrade>>.
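As a sketch, the bundled script mentioned above can drive such a restart (flags vary by version; check the script's usage output first):

[source,bash]
----
$ ./bin/rolling-restart.sh --graceful
----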
== Rollback
@@ -325,298 +307,11 @@ Quitting...
== Upgrade Paths
[[upgrade1.0]]
=== Upgrading from 0.98.x to 1.x
=== Upgrading to 1.x
In this section we first note the significant changes that come in with 1.0.0+ HBase and then we go over the upgrade process. Be sure to read the significant changes section with care so you avoid surprises.
Please consult the documentation published specifically for the version of HBase that you are upgrading to for details on the upgrade process.
==== Changes of Note!
[[upgrade2.0]]
=== Upgrading to 2.x
Here we list important changes that came in with 1.0.0+ since 0.98.x, changes you should be aware of that will go into effect once you upgrade.
[[zookeeper.3.4]]
.ZooKeeper 3.4 is required in HBase 1.0.0+
See <<zookeeper.requirements>>.
[[default.ports.changed]]
.HBase Default Ports Changed
The ports used by HBase changed. They used to be in the 600XX range. In HBase 1.0.0 they have been moved up out of the ephemeral port range and are 160XX instead (Master web UI was 60010 and is now 16010; the RegionServer web UI was 60030 and is now 16030, etc.). If you want to keep the old port locations, copy the port setting configs from _hbase-default.xml_ into _hbase-site.xml_, change them back to the old values from the HBase 0.98.x era, and ensure you've distributed your configurations before you restart.
.HBase Master Port Binding Change
In HBase 1.0.x, the HBase Master binds the RegionServer ports as well as the Master
ports. This behavior is changed from HBase versions prior to 1.0. In HBase 1.1 and 2.0 branches,
this behavior is reverted to the pre-1.0 behavior of the HBase master not binding the RegionServer
ports.
[[upgrade1.0.hbase.bucketcache.percentage.in.combinedcache]]
.hbase.bucketcache.percentage.in.combinedcache configuration has been REMOVED
You may have made use of this configuration if you are using BucketCache. If NOT using BucketCache, this change does not affect you. Its removal means that your L1 LruBlockCache is now sized using `hfile.block.cache.size` -- i.e. the way you would size the on-heap L1 LruBlockCache if you were NOT doing BucketCache -- and the BucketCache size is now whatever the setting for `hbase.bucketcache.size` is. You may need to adjust configs to get the LruBlockCache and BucketCache sizes set to what they were in 0.98.x and previous. If you did not set this config, its default value was 0.9. If you do nothing, your BucketCache will increase in size by 10%. Your L1 LruBlockCache will become `hfile.block.cache.size` times your java heap size (`hfile.block.cache.size` is a float between 0.0 and 1.0). To read more, see link:https://issues.apache.org/jira/browse/HBASE-11520[HBASE-11520 Simplify offheap cache config by removing the confusing "hbase.bucketcache.percentage.in.combinedcache"].
[[hbase-12068]]
.If you have your own custom filters.
See the release notes on the issue link:https://issues.apache.org/jira/browse/HBASE-12068[HBASE-12068 [Branch-1\] Avoid need to always do KeyValueUtil#ensureKeyValue for Filter transformCell]; be sure to follow the recommendations therein.
.Mismatch Of `hbase.client.scanner.max.result.size` Between Client and Server
If either the client or server version is lower than 0.98.11/1.0.0 and the server
has a smaller value for `hbase.client.scanner.max.result.size` than the client, scan
requests that reach the server's `hbase.client.scanner.max.result.size` are likely
to miss data. In particular, 0.98.11 defaults `hbase.client.scanner.max.result.size`
to 2 MB but other versions default to larger values. For this reason, be very careful
using 0.98.11 servers with any other client version.
.Availability of Date Tiered Compaction.
The Date Tiered Compaction feature available as of 0.98.19 is available in the 1.y release line starting in release 1.3.0. If you have enabled this feature for any tables you must upgrade to version 1.3.0 or later. If you attempt to use an earlier 1.y release, any tables configured to use date tiered compaction will fail to have their regions open.
[[upgrade1.0.rolling.upgrade]]
==== Rolling upgrade from 0.98.x to HBase 1.0.0
.From 0.96.x to 1.0.0
NOTE: You cannot do a <<hbase.rolling.upgrade,rolling upgrade>> from 0.96.x to 1.0.0 without first doing a rolling upgrade to 0.98.x. See comment in link:https://issues.apache.org/jira/browse/HBASE-11164?focusedCommentId=14182330&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&#35;comment-14182330[HBASE-11164 Document and test rolling updates from 0.98 -> 1.0] for the why. Also because HBase 1.0.0 enables HFile v3 by default, link:https://issues.apache.org/jira/browse/HBASE-9801[HBASE-9801 Change the default HFile version to V3], and support for HFile v3 only arrives in 0.98, this is another reason you cannot rolling upgrade from HBase 0.96.x; if the rolling upgrade stalls, the 0.96.x servers cannot open files written by the servers running the newer HBase 1.0.0 with HFile's of version 3.
There are no known issues running a <<hbase.rolling.upgrade,rolling upgrade>> from HBase 0.98.x to HBase 1.0.0.
[[upgrade1.0.scanner.caching]]
==== Scanner Caching has Changed
.From 0.98.x to 1.x
In hbase-1.x, the default Scan caching 'number of rows' changed.
Where in 0.98.x, it defaulted to 100, in later HBase versions, the
default became Integer.MAX_VALUE. Not setting a cache size can make
for Scans that run for a long time server-side, especially if
they are running with stringent filtering. See
link:https://issues.apache.org/jira/browse/HBASE-16973[Revisiting default value for hbase.client.scanner.caching];
for further discussion.
[[upgrade1.0.from.0.94]]
==== Upgrading to 1.0 from 0.94
You cannot rolling upgrade from 0.94.x to 1.x.x. You must stop your cluster, install the 1.x.x software, run the migration described at <<executing.the.0.96.upgrade>> (substituting 1.x.x. wherever we make mention of 0.96.x in the section below), and then restart. Be sure to upgrade your ZooKeeper if it is a version less than the required 3.4.x.
[[upgrade0.98]]
=== Upgrading from 0.96.x to 0.98.x
A rolling upgrade from 0.96.x to 0.98.x works. The two versions are not binary compatible.
Additional steps are required to take advantage of some of the new features of 0.98.x, including cell visibility labels, cell ACLs, and transparent server side encryption. See <<security>> for more information. Significant performance improvements include a change to the write ahead log threading model that provides higher transaction throughput under high load, reverse scanners, MapReduce over snapshot files, and striped compaction.
Clients and servers can run with 0.98.x and 0.96.x versions. However, applications may need to be recompiled due to changes in the Java API.
=== Upgrading from 0.94.x to 0.98.x
A rolling upgrade from 0.94.x directly to 0.98.x does not work. The upgrade path follows the same procedures as <<upgrade0.96>>. Additional steps are required to use some of the new features of 0.98.x. See <<upgrade0.98>> for an abbreviated list of these features.
[[upgrade0.96]]
=== Upgrading from 0.94.x to 0.96.x
==== The "Singularity"
You will have to stop your old 0.94.x cluster completely to upgrade. If you are replicating between clusters, both clusters will have to go down to upgrade. Make sure it is a clean shutdown. The less WAL files around, the faster the upgrade will run (the upgrade will split any log files it finds in the filesystem as part of the upgrade process). All clients must be upgraded to 0.96 too.
The API has changed. You will need to recompile your code against 0.96 and you may need to adjust applications to go against new APIs (TODO: List of changes).
[[executing.the.0.96.upgrade]]
==== Executing the 0.96 Upgrade
.HDFS and ZooKeeper must be up!
NOTE: HDFS and ZooKeeper should be up and running during the upgrade process.
HBase 0.96.0 comes with an upgrade script. Run
[source,bash]
----
$ bin/hbase upgrade
----
to see its usage. The script has two main modes: `-check`, and `-execute`.
.check
The check step is run against a running 0.94 cluster. Run it from a downloaded 0.96.x binary. The check step is looking for the presence of HFile v1 files. These are unsupported in HBase 0.96.0. To have them rewritten as HFile v2 you must run a compaction.
The check step prints stats at the end of its run (grep for `“Result:”` in the log) printing absolute path of the tables it scanned, any HFile v1 files found, the regions containing said files (these regions will need a major compaction), and any corrupted files if found. A corrupt file is unreadable, and so is undefined (neither HFile v1 nor HFile v2).
To run the check step, run
[source,bash]
----
$ bin/hbase upgrade -check
----
Here is sample output:
----
Tables Processed:
hdfs://localhost:41020/myHBase/.META.
hdfs://localhost:41020/myHBase/usertable
hdfs://localhost:41020/myHBase/TestTable
hdfs://localhost:41020/myHBase/t
Count of HFileV1: 2
HFileV1:
hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812/family/249450144068442524
hdfs://localhost:41020/myHBase/usertable/ecdd3eaee2d2fcf8184ac025555bb2af/family/249450144068442512
Count of corrupted files: 1
Corrupted Files:
hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812/family/1
Count of Regions with HFileV1: 2
Regions to Major Compact:
hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812
hdfs://localhost:41020/myHBase/usertable/ecdd3eaee2d2fcf8184ac025555bb2af
There are some HFileV1, or corrupt files (files with incorrect major version)
----
In the above sample output, there are two HFile v1 files in two regions, and one corrupt file. Corrupt files should probably be removed. The regions that have HFile v1s need to be major compacted. To major compact, start up the hbase shell and review how to compact an individual region. After the major compaction is done, rerun the check step and the HFile v1 files should be gone, replaced by HFile v2 instances.
By default, the check step scans the HBase root directory (defined as `hbase.rootdir` in the configuration). To scan a specific directory only, pass the `-dir` option.
[source,bash]
----
$ bin/hbase upgrade -check -dir /myHBase/testTable
----
The above command would detect HFile v1 files in the _/myHBase/testTable_ directory.
Once the check step reports all the HFile v1 files have been rewritten, it is safe to proceed with the upgrade.
.execute
After the _check_ step shows the cluster is free of HFile v1, it is safe to proceed with the upgrade. Next is the _execute_ step. You must *SHUTDOWN YOUR 0.94.x CLUSTER* before you can run the execute step. The execute step will not run if it detects running HBase masters or RegionServers.
[NOTE]
====
HDFS and ZooKeeper should be up and running during the upgrade process. If zookeeper is managed by HBase, then you can start zookeeper so it is available to the upgrade by running
[source,bash]
----
$ ./hbase/bin/hbase-daemon.sh start zookeeper
----
====
The execute upgrade step is made of three substeps.
* Namespaces: HBase 0.96.0 has support for namespaces. The upgrade needs to reorder directories in the filesystem for namespaces to work.
* ZNodes: All znodes are purged so that new ones can be written in their place using a new protobuf'ed format and a few are migrated in place: e.g. replication and table state znodes
* WAL Log Splitting: If the 0.94.x cluster shutdown was not clean, we'll split WAL logs as part of migration before we startup on 0.96.0. This WAL splitting runs slower than the native distributed WAL splitting because it is all inside the single upgrade process (so try and get a clean shutdown of the 0.94.0 cluster if you can).
To run the _execute_ step, make sure that first you have copied HBase 0.96.0 binaries everywhere under servers and under clients. Make sure the 0.94.0 cluster is down. Then do as follows:
[source,bash]
----
$ bin/hbase upgrade -execute
----
Here is some sample output.
----
Starting Namespace upgrade
Created version file at hdfs://localhost:41020/myHBase with version=7
Migrating table testTable to hdfs://localhost:41020/myHBase/.data/default/testTable
.....
Created version file at hdfs://localhost:41020/myHBase with version=8
Successfully completed NameSpace upgrade.
Starting Znode upgrade
.....
Successfully completed Znode upgrade
Starting Log splitting
...
Successfully completed Log splitting
----
If the output from the execute step looks good, stop the zookeeper instance you started to do the upgrade:
[source,bash]
----
$ ./hbase/bin/hbase-daemon.sh stop zookeeper
----
Now start up hbase-0.96.0.
[[s096.migration.troubleshooting]]
=== Troubleshooting
[[s096.migration.troubleshooting.old.client]]
.Old Client connecting to 0.96 cluster
It will fail with an exception like the below. Upgrade.
----
17:22:15 Exception in thread "main" java.lang.IllegalArgumentException: Not a host:port pair: PBUF
17:22:15 *
17:22:15 api-compat-8.ent.cloudera.com <20><> <20><><EFBFBD>(
17:22:15 at org.apache.hadoop.hbase.util.Addressing.parseHostname(Addressing.java:60)
17:22:15 at org.apache.hadoop.hbase.ServerName.<init>(ServerName.java:101)
17:22:15 at org.apache.hadoop.hbase.ServerName.parseVersionedServerName(ServerName.java:283)
17:22:15 at org.apache.hadoop.hbase.MasterAddressTracker.bytesToServerName(MasterAddressTracker.java:77)
17:22:15 at org.apache.hadoop.hbase.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:61)
17:22:15 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:703)
17:22:15 at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:126)
17:22:15 at Client_4_3_0.setup(Client_4_3_0.java:716)
17:22:15 at Client_4_3_0.main(Client_4_3_0.java:63)
----
==== Upgrading `META` to use Protocol Buffers (Protobuf)
When you upgrade from versions prior to 0.96, `META` needs to be converted to use protocol buffers. This is controlled by the configuration option `hbase.MetaMigrationConvertingToPB`, which is set to `true` by default. Therefore, by default, no action is required on your part.
The migration is a one-time event. However, every time your cluster starts, `META` is scanned to ensure that it does not need to be converted. If you have a very large number of regions, this scan can take a long time. Starting in 0.98.5, you can set `hbase.MetaMigrationConvertingToPB` to `false` in _hbase-site.xml_, to disable this start-up scan. This should be considered an expert-level setting.
[[upgrade0.94]]
=== Upgrading from 0.92.x to 0.94.x
We used to think that 0.92 and 0.94 were interface compatible and that you can do a rolling upgrade between these versions but then we figured that link:https://issues.apache.org/jira/browse/HBASE-5357[HBASE-5357 Use builder pattern in HColumnDescriptor] changed method signatures so rather than return `void` they instead return `HColumnDescriptor`. This will throw `java.lang.NoSuchMethodError: org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(I)V` so 0.92 and 0.94 are NOT compatible. You cannot do a rolling upgrade between them.
[[upgrade0.92]]
=== Upgrading from 0.90.x to 0.92.x
==== Upgrade Guide
You will find that 0.92.0 runs a little differently to 0.90.x releases. Here are a few things to watch out for upgrading from 0.90.x to 0.92.0.
.tl:dr
[NOTE]
====
These are the important things to know before upgrading.
. Once you upgrade, you can't go back.
. MSLAB is on by default. Watch that heap usage if you have a lot of regions.
. Distributed Log Splitting is on by default. It should make RegionServer failover faster.
. There's a separate tarball for security.
. If `-XX:MaxDirectMemorySize` is set in your _hbase-env.sh_, it's going to enable the experimental off-heap cache (You may not want this).
====
.You can't go back!
To move to 0.92.0, all you need to do is shutdown your cluster, replace your HBase 0.90.x with HBase 0.92.0 binaries (be sure you clear out all 0.90.x instances) and restart (You cannot do a rolling restart from 0.90.x to 0.92.x -- you must restart). On startup, the `.META.` table content is rewritten removing the table schema from the `info:regioninfo` column. Also, any flushes done post first startup will write out data in the new 0.92.0 file format, <<hfilev2>>. This means you cannot go back to 0.90.x once you've started HBase 0.92.0 over your HBase data directory.
.MSLAB is ON by default
In 0.92.0, the `<<hbase.hregion.memstore.mslab.enabled,hbase.hregion.memstore.mslab.enabled>>` flag is set to `true` (See <<gcpause>>). In 0.90.x it was false. When it is enabled, memstores will step allocate memory in MSLAB 2MB chunks even if the memstore has zero or just a few small elements. This is fine usually but if you had lots of regions per RegionServer in a 0.90.x cluster (and MSLAB was off), you may find yourself OOME'ing on upgrade because the `thousands of regions * number of column families * 2MB MSLAB` (at a minimum) puts your heap over the top. Set `hbase.hregion.memstore.mslab.enabled` to `false` or set the MSLAB size down from 2MB by setting `hbase.hregion.memstore.mslab.chunksize` to something less.
[[dls]]
.Distributed Log Splitting is on by default
Previously, WAL logs on crash were split by the Master alone. In 0.92.0, log splitting is done by the cluster (See link:https://issues.apache.org/jira/browse/hbase-1364[HBASE-1364 [performance\] Distributed splitting of regionserver commit logs] or see the blog post link:http://blog.cloudera.com/blog/2012/07/hbase-log-splitting/[Apache HBase Log Splitting]). This should cut down significantly on the amount of time it takes splitting logs and getting regions back online again.
.Memory accounting is different now
In 0.92.0, <<hfilev2>> indices and bloom filters take up residence in the same LRU used for caching blocks that come from the filesystem. In 0.90.x, the HFile v1 indices lived outside of the LRU so they took up space even if the index was on a cold file, one that wasn't being actively used. With the indices now in the LRU, you may find you have less space for block caching. Adjust your block cache accordingly. See <<block.cache>> for more detail. The block cache default size has been changed in 0.92.0 from 0.2 (20 percent of heap) to 0.25.
.On the Hadoop version to use
Run 0.92.0 on Hadoop 1.0.x (or CDH3u3). The performance benefits are worth making the move. Otherwise, our Hadoop prescription is as it has been; you need an Hadoop that supports a working sync. See <<hadoop>>.
If running on Hadoop 1.0.x (or CDH3u3), enable local read. See link:http://files.meetup.com/1350427/hug_ebay_jdcryans.pdf[Practical Caching] presentation for ruminations on the performance benefits going local (and for how to enable local reads).
.HBase 0.92.0 ships with ZooKeeper 3.4.2
If you can, upgrade your ZooKeeper. If you can't, 3.4.2 clients should work against 3.3.X ensembles (HBase makes use of 3.4.2 API).
.Online alter is off by default
In 0.92.0, we've added an experimental online schema alter facility (See <<hbase.online.schema.update.enable,hbase.online.schema.update.enable>>). It's off by default. Enable it at your own risk. Online alter and splitting tables do not play well together so be sure your cluster is quiescent while using this feature (for now).
.WebUI
The web UI has had a few additions made in 0.92.0. It now shows a list of the regions currently transitioning, recent compactions/flushes, and a process list of running processes (usually empty if all is well and requests are being handled promptly). Other additions include requests by region, a debugging servlet dump, etc.
.Security tarball
We now ship with two tarballs: secure and insecure HBase. Documentation on how to set up a secure HBase is on the way.
.Changes in HBase replication
0.92.0 adds two new features: multi-slave and multi-master replication. The way to enable this is the same as adding a new peer, so in order to have multi-master you would just run add_peer for each cluster that acts as a master to the other slave clusters. Collisions are handled at the timestamp level, which may or may not be what you want; this needs to be evaluated on a per-use-case basis. Replication is still experimental in 0.92 and is disabled by default; run it at your own risk.
.RegionServer now aborts if OOME
If an OOME occurs, we now have the JVM kill -9 the RegionServer process so it goes down fast. Previously, a RegionServer might stick around after incurring an OOME, limping along in some wounded state. To disable this facility (we recommend you leave it in place), you'd need to edit the bin/hbase file. Look for the addition of the -XX:OnOutOfMemoryError="kill -9 %p" arguments (See link:https://issues.apache.org/jira/browse/HBASE-4769[HBASE-4769 - Abort RegionServer Immediately on OOME]).
.HFile v2 and the “Bigger, Fewer” Tendency
0.92.0 stores data in a new format, <<hfilev2>>. As HBase runs, it will move all your data from HFile v1 to HFile v2 format. This auto-migration will run in the background as flushes and compactions run. HFile v2 allows HBase to run with larger regions/files. In fact, we encourage that all HBasers going forward tend toward Facebook axiom #1: run with larger, fewer regions. If you have lots of regions now -- more than 100s per host -- you should look into setting your region size up after you move to 0.92.0 (In 0.92.0, the default size is now 1G, up from 256M), and then running the online merge tool (See link:https://issues.apache.org/jira/browse/HBASE-1621[HBASE-1621 merge tool should work on online cluster, but disabled table]).
[[upgrade0.90]]
=== Upgrading to HBase 0.90.x from 0.20.x or 0.89.x
This version of 0.90.x HBase can be started on data written by HBase 0.20.x or HBase 0.89.x. There is no need for a migration step. HBase 0.89.x and 0.90.x write out the names of region directories differently -- they name them with an md5 hash of the region name rather than a jenkins hash -- so this means that once started, there is no going back to HBase 0.20.x.
Be sure to remove the _hbase-default.xml_ from your _conf_ directory on upgrade. A 0.20.x version of this file will have sub-optimal configurations for 0.90.x HBase. The _hbase-default.xml_ file is now bundled into the HBase jar and read from there. If you would like to review the content of this file, see it in the src tree at _src/main/resources/hbase-default.xml_ or see <<hbase_default_configurations>>.
Finally, if upgrading from 0.20.x, check your .META. schema in the shell. In the past we would recommend that users run with a 16kb MEMSTORE_FLUSHSIZE. Run
----
hbase> scan '-ROOT-'
----
in the shell. This will output the current `.META.` schema. Check `MEMSTORE_FLUSHSIZE` size. Is it 16kb (16384)? If so, you will need to change this (The 'normal'/default value is 64MB (67108864)). Run the script `bin/set_meta_memstore_size.rb`. This will make the necessary edit to your `.META.` schema. Failure to run this change will make for a slow cluster. See link:https://issues.apache.org/jira/browse/HBASE-3499[HBASE-3499 Users upgrading to 0.90.0 need to have their .META. table updated with the right MEMSTORE_SIZE].
Coming soon...