HBASE-26933 Remove all ref guide stuff on branch other than master (#4426)

Signed-off-by: Xiaolin Ha <haxiaolin@apache.org>
Duo Zhang 2022-05-22 15:12:52 +08:00 committed by GitHub
parent d8d1089649
commit 778ae2d655
56 changed files with 0 additions and 32912 deletions


@@ -1753,8 +1753,6 @@
<exclude>**/*.svg</exclude>
<!-- non-standard notice file from jruby included by reference -->
<exclude>**/src/main/resources/META-INF/LEGAL</exclude>
<!-- MIT: https://github.com/asciidoctor/asciidoctor/blob/master/LICENSE.adoc -->
<exclude>**/src/main/asciidoc/hbase.css</exclude>
<!-- MIT https://jquery.org/license -->
<exclude>**/jquery.min.js</exclude>
<exclude>**/jquery.tablesorter.min.js</exclude>


@@ -1,173 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[amv2]]
= AMv2 Description for Devs
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
The AssignmentManager (AM) in HBase Master manages assignment of Regions over a cluster of RegionServers.
The AMv2 project is a redo of Assignment in an attempt at addressing the root cause of many of our operational issues in production, namely slow assignment and problematic accounting such that Regions are misplaced or stuck offline in the notorious _Regions-In-Transition (RIT)_ limbo state.
Below are notes for devs on key aspects of AMv2 in no particular order.
== Background
Assignment in HBase 1.x has been problematic in operation. It is not hard to see why. Region state is kept at the other end of an RPC in ZooKeeper (Terminal states -- i.e. OPEN or CLOSED -- are published to the _hbase:meta_ table). In HBase-1.x.x, state has multiple writers with Master and RegionServers all able to make state edits concurrently (in _hbase:meta_ table and out on ZooKeeper). If clocks are awry or watchers missed, state changes can be skipped or overwritten. Locking of HBase Entities -- tables, regions -- is not comprehensive so a table operation -- disable/enable -- could clash with a region-level operation; a split or merge. Region state is distributed and hard to reason about and test. Assignment is slow in operation because each assign involves moving remote znodes through transitions. Cluster size tends to top out at a couple of hundred thousand regions; beyond this, cluster start/stop takes hours and is prone to corruption.
AMv2 (AssignmentManager Version 2) is a refactor (https://issues.apache.org/jira/browse/HBASE-14350[HBASE-14350]) of the hbase-1.x AssignmentManager putting it up on a https://issues.apache.org/jira/browse/HBASE-12439[ProcedureV2 (HBASE-12439)] basis. ProcedureV2 (Pv2) is an awkwardly named system that allows describing and running multi-step state machines. It is performant and persists all state to a Store which is recoverable post crash. See the companion chapter on <<pv2>>, to learn more about the ProcedureV2 system.
In AMv2, all assignment, crash handling, splits and merges are recast as Procedures(v2). ZooKeeper is purged from the mix. As before, the final assignment state gets published to _hbase:meta_ for non-Master participants (all clients) to read, with intermediate state kept in the local Pv2 WAL-based store, but only the active Master, a single writer, evolves state. The Master's in-memory cluster image is the authority, and if there is disagreement, RegionServers are forced to comply. Pv2 adds shared/exclusive locking of all core HBase Entities -- namespace, tables, and regions -- to ensure one actor at a time has access and to prevent operations contending over resources (move/split, disable/assign, etc.).
This redo of the AM atop a purpose-built, performant state machine, with all operations taking on the common Procedure form and a single writer of state, moves our AM to a new level of resilience and scale.
== New System
Each Region Assign or Unassign of a Region is now a Procedure. A Move (Region) Procedure is a compound of Procedures; it is the running of an Unassign Procedure followed by an Assign Procedure. The Move Procedure spawns the Assign and Unassign in series and then waits on their completions.
And so on. ServerCrashProcedure spawns the WAL splitting tasks and then the reassign of all regions that were hosted on the crashed server as subprocedures.
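As a rough, self-contained sketch of this compound pattern (the class and state names below are illustrative stand-ins, not the actual Pv2 classes):
[source,java]
----
// Toy sketch of a compound "move": an unassign step followed by an
// assign step; in the real framework each step is a child procedure
// the parent spawns and then waits on.
public class MoveSketch {
  enum State { UNASSIGN, ASSIGN, DONE }

  public static void main(String[] args) {
    State state = State.UNASSIGN;
    while (state != State.DONE) {
      switch (state) {
        case UNASSIGN:
          // real code: spawn an Unassign Procedure, suspend until CLOSED
          System.out.println("unassign region from current server");
          state = State.ASSIGN;
          break;
        case ASSIGN:
          // real code: spawn an Assign Procedure, suspend until OPENED
          System.out.println("assign region to target server");
          state = State.DONE;
          break;
        default:
          break;
      }
    }
  }
}
----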
AMv2 Procedures are run by the Master in a ProcedureExecutor instance. All Procedures make use of utility provided by the Pv2 framework.
For example, Procedures persist each state transition to the framework's Procedure Store. The default implementation is a WAL kept on HDFS. On crash, we reopen the Store and rerun all WALs of Procedure transitions to put the Assignment State Machine back into the attitude it had just before the crash. We then continue Procedure execution.
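The recovery idea can be sketched in a few self-contained lines (a toy illustration, not the Pv2 store API):
[source,java]
----
import java.util.List;

// Toy sketch of crash recovery: re-apply the persisted transitions in
// order to rebuild in-memory state, then resume from wherever replay lands.
public class ReplaySketch {
  public static void main(String[] args) {
    List<String> persisted =
        List.of("REGION_TRANSITION_QUEUE", "REGION_TRANSITION_DISPATCH");
    String state = "INITIAL";
    for (String transition : persisted) {
      state = transition; // "apply" each recorded transition
    }
    System.out.println("Resuming from state: " + state);
  }
}
----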
In the new system, the Master is the Authority on all things Assign. Previously we were ambiguous; e.g. the RegionServer was in charge of Split operations. The Master keeps an in-memory image of Region states and servers. If there is disagreement, the Master always prevails; at an extreme, it will kill the RegionServer that is in disagreement.
A new RegionStateStore class takes care of publishing the terminal Region state, whether OPEN or CLOSED, out to the _hbase:meta_ table.
RegionServers now report their run version on Connection. This version is available inside the AM for use when running migrating/rolling restarts.
== Procedures Detail
=== Assign/Unassign
Assign and Unassign subclass a common RegionTransitionProcedure. There can only be one RegionTransitionProcedure per region running at a time, since the RTP instance takes a lock on the region. The RTP base Procedure has three steps: a store-the-procedure step (REGION_TRANSITION_QUEUE); a dispatch of the procedure open or close, followed by a suspend waiting on the remote regionserver to report successful open or failure (REGION_TRANSITION_DISPATCH), or notification that the server fielding the request crashed; and finally, registration of the successful open/close in hbase:meta (REGION_TRANSITION_FINISH).
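Restated as an enum (the enum name here is illustrative; the three state names are as given above):
[source,java]
----
// Illustrative enum of the three RTP steps described above.
enum RegionTransitionStep {
  REGION_TRANSITION_QUEUE,    // persist the procedure to the Procedure Store
  REGION_TRANSITION_DISPATCH, // dispatch open/close to the RS, then suspend
  REGION_TRANSITION_FINISH    // record the terminal state in hbase:meta
}
----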
Here is how the assign of region 56f985a727afe80a184dac75fbf6860c looks in the logs. The assign was provoked by a Server Crash (Process ID 1176, or pid=1176; when it is the parent of a procedure, it is identified as ppid=1176). The assign is pid=1179, the second of the two regions being assigned by this Server Crash.
[source]
----
2017-05-23 12:04:24,175 INFO [ProcExecWrkr-30] procedure2.ProcedureExecutor: Initialized subprocedures=[{pid=1178, ppid=1176, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=IntegrationTestBigLinkedList, region=bfd57f0b72fd3ca77e9d3c5e3ae48d76, target=ve0540.halxg.example.org,16020,1495525111232}, {pid=1179, ppid=1176, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=IntegrationTestBigLinkedList, region=56f985a727afe80a184dac75fbf6860c, target=ve0540.halxg.example.org,16020,1495525111232}]
----
Next we start the assign by queuing (registering) the Procedure with the framework.
[source]
----
2017-05-23 12:04:24,241 INFO [ProcExecWrkr-30] assignment.AssignProcedure: Start pid=1179, ppid=1176, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=IntegrationTestBigLinkedList, region=56f985a727afe80a184dac75fbf6860c, target=ve0540.halxg.example.org,16020,1495525111232; rit=OFFLINE, location=ve0540.halxg.example.org,16020,1495525111232; forceNewPlan=false, retain=false
----
Track the running of Procedures in logs by tracing their process id -- here pid=1179.
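For example, to follow the assign above through the Master log (the log file name here is an assumption; use whatever your deployment writes):
----
$ grep "pid=1179" hbase-master.log
----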
Next we move to the dispatch phase, where we update the hbase:meta table, setting the region state to OPENING on server ve0540. We then dispatch an RPC to ve0540 asking it to open the region. Thereafter we suspend the Assign until we get a message back from ve0540 on whether it has opened the region successfully (or not).
[source]
----
2017-05-23 12:04:24,494 INFO [ProcExecWrkr-38] assignment.RegionStateStore: pid=1179 updating hbase:meta row=IntegrationTestBigLinkedList,H\xE3@\x8D\x964\x9D\xDF\x8F@9\x0F\xC8\xCC\xC2,1495566261066.56f985a727afe80a184dac75fbf6860c., regionState=OPENING, regionLocation=ve0540.halxg.example.org,16020,1495525111232
2017-05-23 12:04:24,498 INFO [ProcExecWrkr-38] assignment.RegionTransitionProcedure: Dispatch pid=1179, ppid=1176, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=IntegrationTestBigLinkedList, region=56f985a727afe80a184dac75fbf6860c, target=ve0540.halxg.example.org,16020,1495525111232; rit=OPENING, location=ve0540.halxg.example.org,16020,1495525111232
----
Below we log the incoming report that the region opened successfully on ve0540. The Procedure is woken up (you can tell the procedure is running by the name of the thread; it's a ProcedureExecutor thread, ProcExecWrkr-9). The woken-up Procedure updates state in hbase:meta to denote the region as open on ve0540. It then reports finished and exits.
[source]
----
2017-05-23 12:04:26,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=46,queue=1,port=16000] assignment.RegionTransitionProcedure: Received report OPENED seqId=11984985, pid=1179, ppid=1176, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=IntegrationTestBigLinkedList, region=56f985a727afe80a184dac75fbf6860c, target=ve0540.halxg.example.org,16020,1495525111232; rit=OPENING, location=ve0540.halxg.example.org,16020,1495525111232 2017-05-23 12:04:26,643 INFO [ProcExecWrkr-9] assignment.RegionStateStore: pid=1179 updating hbase:meta row=IntegrationTestBigLinkedList,H\xE3@\x8D\x964\x9D\xDF\x8F@9\x0F\xC8\xCC\xC2,1495566261066.56f985a727afe80a184dac75fbf6860c., regionState=OPEN, openSeqNum=11984985, regionLocation=ve0540.halxg.example.org,16020,1495525111232
2017-05-23 12:04:26,836 INFO [ProcExecWrkr-9] procedure2.ProcedureExecutor: Finish suprocedure pid=1179, ppid=1176, state=SUCCESS; AssignProcedure table=IntegrationTestBigLinkedList, region=56f985a727afe80a184dac75fbf6860c, target=ve0540.halxg.example.org,16020,1495525111232
----
Unassign looks similar, given it is based on the base RegionTransitionProcedure. It has the same state transitions and does basically the same steps, but with different state names (CLOSING, CLOSED).
Most other procedures are subclasses of a Pv2 StateMachine implementation. We have both Table- and Region-focused StateMachine types.
== UI
Along the top bar on the Master, you can now find a Procedures&Locks tab which takes you to a page that is ugly but useful. It dumps currently running procedures and framework locks. Look at this when you can't figure out what is stuck; it will at least identify problematic procedures (take the pid and grep the logs…). Look for ROLLEDBACK or pids that have been RUNNING for a long time.
== Logging
Procedures log their process ids as pid= and their parent ids (ppid=) everywhere. Work has been done so you can grep the pid and see the history of a procedure operation.
== Implementation Notes
In this section we note some idiosyncrasies of operation as an attempt at saving you some head-scratching.
=== Region Transition RPC and RS Heartbeat can arrive at ~same time on Master
Reporting Region Transition on a RegionServer is now an RPC distinct from RS heartbeating (the RegionServerServices Service). A heartbeat and a status update can arrive at the Master at about the same time. The Master will update its internal state for a Region, but this same state is checked during heartbeat processing. We may find the unexpected; i.e. a Region just reported as CLOSED, so the heartbeat is surprised to find the region OPEN on the back of the RS report. In the new system, all slaves must defer to the Master's understanding of cluster state; the Master will kill/close any misaligned entities.
To address the above, we added a lastUpdate for in-memory Master state. Let a region state have some vintage before we act on it (one second currently).
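A sketch of that check, with hypothetical names:
[source,java]
----
// Hypothetical sketch of the "vintage" check: do not act on a region
// state that was updated too recently (one second, per the text above).
public class VintageCheckSketch {
  static final long MIN_VINTAGE_MS = 1000L;

  static boolean oldEnoughToActOn(long lastUpdateMs, long nowMs) {
    return nowMs - lastUpdateMs >= MIN_VINTAGE_MS;
  }

  public static void main(String[] args) {
    long now = System.currentTimeMillis();
    System.out.println(oldEnoughToActOn(now - 500, now));  // false: too fresh
    System.out.println(oldEnoughToActOn(now - 2000, now)); // true
  }
}
----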
=== Master as RegionServer or as RegionServer that just does system tables
AMv2 enforces the current master-branch default of HMaster carrying system tables only; i.e. the Master in an HBase cluster also acts as a RegionServer, but it is the exclusive host for the core system tables such as _hbase:meta_ and _hbase:namespace_. This is causing a couple of test failures, as AMv1, though it is not supposed to, allows moving hbase:meta off Master, while AMv2 does not.
== New Configs
These configs all need documentation on when you'd change them.
=== hbase.procedure.remote.dispatcher.threadpool.size
Default: 128
=== hbase.procedure.remote.dispatcher.delay.msec
Default: 150 ms
=== hbase.procedure.remote.dispatcher.max.queue.size
Default: 32
=== hbase.regionserver.rpc.startup.waittime
Default: 60 seconds.
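As an example, the dispatcher keys above can be overridden programmatically via the Hadoop `Configuration` API (they can equally be set in _hbase-site.xml_); the values shown just restate the defaults:
[source,java]
----
import org.apache.hadoop.conf.Configuration;

// Sketch: setting the remote-dispatcher keys documented above.
// These calls restate the defaults; tune them for your cluster.
public class DispatcherConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setInt("hbase.procedure.remote.dispatcher.threadpool.size", 128);
    conf.setInt("hbase.procedure.remote.dispatcher.delay.msec", 150);
    conf.setInt("hbase.procedure.remote.dispatcher.max.queue.size", 32);
    System.out.println(
        conf.getInt("hbase.procedure.remote.dispatcher.threadpool.size", -1));
  }
}
----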
== Tools
HBASE-15592 Print Procedure WAL Content
Patch in https://issues.apache.org/jira/browse/HBASE-18152[HBASE-18152] ([AMv2] Corrupt Procedure WAL file; procedure data stored out of order): https://issues.apache.org/jira/secure/attachment/12871066/reading_bad_wal.patch[reading_bad_wal.patch]
=== MasterProcedureSchedulerPerformanceEvaluation
Tool to test the performance of locks and queues in the procedure scheduler independently of other framework components. Run this after any substantial changes in the proc system. Prints nice output:
----
******************************************
Time - addBack : 5.0600sec
Ops/sec - addBack : 1.9M
Time - poll : 19.4590sec
Ops/sec - poll : 501.9K
Num Operations : 10000000
Completed : 10000006
Yield : 22025876
Num Tables : 5
Regions per table : 10
Operations type : both
Threads : 10
******************************************
Raw format for scripts
RESULT [num_ops=10000000, ops_type=both, num_table=5, regions_per_table=10, threads=10, num_yield=22025876, time_addback_ms=5060, time_poll_ms=19459]
----
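One way to launch the tool is via the `hbase` launcher with the fully-qualified class name (the package shown here is an assumption; check your build):
----
$ hbase org.apache.hadoop.hbase.master.procedure.MasterProcedureSchedulerPerformanceEvaluation
----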


@@ -1,181 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[appendix]
[[appendix_acl_matrix]]
== Access Control Matrix
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
:source-language: java
The following matrix shows the permission set required to perform operations in HBase.
Before using the table, read through the information about how to interpret it.
.Interpreting the ACL Matrix Table
The following conventions are used in the ACL Matrix table:
=== Scopes
Permissions are evaluated starting at the widest scope and working to the narrowest scope.
A scope corresponds to a level of the data model. From broadest to narrowest, the scopes are as follows:
.Scopes
* Global
* Namespace (NS)
* Table
* Column Family (CF)
* Column Qualifier (CQ)
* Cell
For instance, a permission granted at table level dominates any grants done at the Column Family, Column Qualifier, or cell level. The user can do what that grant implies at any location in the table. A permission granted at global scope dominates all: the user is always allowed to take that action everywhere.
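For example, a table-scope read grant can be issued programmatically with `AccessControlClient` (a hedged sketch; it assumes an open `Connection` to a cluster running the AccessController coprocessor, and the user and table names are hypothetical):
[source,java]
----
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.security.access.AccessControlClient;
import org.apache.hadoop.hbase.security.access.Permission;

public class GrantSketch {
  // Grants "bob" Read at table scope; per the text above, this then
  // applies at every CF/CQ/cell location in the table.
  static void grantTableRead(Connection conn) throws Throwable {
    AccessControlClient.grant(conn, TableName.valueOf("mytable"),
        "bob", null, null, Permission.Action.READ);
  }
}
----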
=== Permissions
Possible permissions include the following:
.Permissions
* Superuser - a special user that belongs to group "supergroup" and has unlimited access
* Admin (A)
* Create \(C)
* Write (W)
* Read \(R)
* Execute (X)
For the most part, permissions work in an expected way, with the following caveats:
Having Write permission does not imply Read permission.::
It is possible and sometimes desirable for a user to be able to write data that same user cannot read. One such example is a log-writing process.
The [systemitem]+hbase:meta+ table is readable by every user, regardless of the user's other grants or restrictions.::
This is a requirement for HBase to function correctly.
`CheckAndPut` and `CheckAndDelete` operations will fail if the user does not have both Write and Read permission.::
`Increment` and `Append` operations do not require Read access.::
The `superuser`, as the name suggests, has permissions to perform all possible operations.::
For the operations marked with *, the checks are done in a post hook, and only the subset of results satisfying the access checks is returned to the user.::
The following table is sorted by the interface that provides each operation.
In case the table goes out of date, the unit tests which check for accuracy of permissions can be found in _hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java_, and the access controls themselves can be examined in _hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java_.
.ACL Matrix
[cols="1,1,1", frame="all", options="header"]
|===
| Interface | Operation | Permissions
| Master | createTable | superuser\|global\(C)\|NS\(C)
| | modifyTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
| | deleteTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
| | truncateTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
| | addColumn | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
| | modifyColumn | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)\|column(A)\|column\(C)
| | deleteColumn | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)\|column(A)\|column\(C)
| | enableTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
| | disableTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
| | disableAclTable | Not allowed
| | move | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
| | assign | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
| | unassign | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
| | regionOffline | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
| | balance | superuser\|global(A)
| | balanceSwitch | superuser\|global(A)
| | shutdown | superuser\|global(A)
| | stopMaster | superuser\|global(A)
| | snapshot | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
| | listSnapshot | superuser\|global(A)\|SnapshotOwner
| | cloneSnapshot | superuser\|global(A)\|(SnapshotOwner & TableName matches)
| | restoreSnapshot | superuser\|global(A)\|SnapshotOwner & (NS(A)\|TableOwner\|table(A))
| | deleteSnapshot | superuser\|global(A)\|SnapshotOwner
| | createNamespace | superuser\|global(A)
| | deleteNamespace | superuser\|global(A)
| | modifyNamespace | superuser\|global(A)
| | getNamespaceDescriptor | superuser\|global(A)\|NS(A)
| | listNamespaceDescriptors* | superuser\|global(A)\|NS(A)
| | flushTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
| | getTableDescriptors* | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
| | getTableNames* | superuser\|TableOwner\|Any global or table perm
| | setUserQuota(global level) | superuser\|global(A)
| | setUserQuota(namespace level) | superuser\|global(A)
| | setUserQuota(Table level) | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
| | setTableQuota | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
| | setNamespaceQuota | superuser\|global(A)
| | addReplicationPeer | superuser\|global(A)
| | removeReplicationPeer | superuser\|global(A)
| | enableReplicationPeer | superuser\|global(A)
| | disableReplicationPeer | superuser\|global(A)
| | getReplicationPeerConfig | superuser\|global(A)
| | updateReplicationPeerConfig | superuser\|global(A)
| | listReplicationPeers | superuser\|global(A)
| | getClusterStatus | any user
| Region | openRegion | superuser\|global(A)
| | closeRegion | superuser\|global(A)
| | flush | superuser\|global(A)\|global\(C)\|TableOwner\|table(A)\|table\(C)
| | split | superuser\|global(A)\|TableOwner\|table(A)
| | compact | superuser\|global(A)\|global\(C)\|TableOwner\|table(A)\|table\(C)
| | getClosestRowBefore | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
| | getOp | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
| | exists | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
| | put | superuser\|global(W)\|NS(W)\|table(W)\|TableOwner\|CF(W)\|CQ(W)
| | delete | superuser\|global(W)\|NS(W)\|table(W)\|TableOwner\|CF(W)\|CQ(W)
| | batchMutate | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
| | checkAndPut | superuser\|global(RW)\|NS(RW)\|TableOwner\|table(RW)\|CF(RW)\|CQ(RW)
| | checkAndPutAfterRowLock | superuser\|global\(R)\|NS\(R)\|TableOwner\|Table\(R)\|CF\(R)\|CQ\(R)
| | checkAndDelete | superuser\|global(RW)\|NS(RW)\|TableOwner\|table(RW)\|CF(RW)\|CQ(RW)
| | checkAndDeleteAfterRowLock | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
| | incrementColumnValue | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
| | append | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
| | appendAfterRowLock | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
| | increment | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
| | incrementAfterRowLock | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
| | scannerOpen | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
| | scannerNext | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
| | scannerClose | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
| | bulkLoadHFile | superuser\|global\(C)\|TableOwner\|table\(C)\|CF\(C)
| | prepareBulkLoad | superuser\|global\(C)\|TableOwner\|table\(C)\|CF\(C)
| | cleanupBulkLoad | superuser\|global\(C)\|TableOwner\|table\(C)\|CF\(C)
| Endpoint | invoke | superuser\|global(X)\|NS(X)\|TableOwner\|table(X)
| AccessController | grant(global level) | global(A)
| | grant(namespace level) | global(A)\|NS(A)
| | grant(table level) | global(A)\|NS(A)\|TableOwner\|table(A)\|CF(A)\|CQ(A)
| | revoke(global level) | global(A)
| | revoke(namespace level) | global(A)\|NS(A)
| | revoke(table level) | global(A)\|NS(A)\|TableOwner\|table(A)\|CF(A)\|CQ(A)
| | getUserPermissions(global level) | global(A)
| | getUserPermissions(namespace level) | global(A)\|NS(A)
| | getUserPermissions(table level) | global(A)\|NS(A)\|TableOwner\|table(A)\|CF(A)\|CQ(A)
| | hasPermission(table level) | global(A)\|SelfUserCheck
| RegionServer | stopRegionServer | superuser\|global(A)
| | mergeRegions | superuser\|global(A)
| | rollWALWriterRequest | superuser\|global(A)
| | replicateLogEntries | superuser\|global(W)
|RSGroup |addRSGroup |superuser\|global(A)
| |balanceRSGroup |superuser\|global(A)
| |getRSGroupInfo |superuser\|global(A)
| |getRSGroupInfoOfTable|superuser\|global(A)
| |getRSGroupOfServer |superuser\|global(A)
| |listRSGroups |superuser\|global(A)
| |moveServers |superuser\|global(A)
| |moveServersAndTables |superuser\|global(A)
| |moveTables |superuser\|global(A)
| |removeRSGroup |superuser\|global(A)
| |removeServers |superuser\|global(A)
|===
:numbered:


@@ -1,441 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[appendix]
[[appendix_contributing_to_documentation]]
== Contributing to Documentation
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
:source-language: java
The Apache HBase project welcomes contributions to all aspects of the project,
including the documentation.
In HBase, documentation includes the following areas, and probably some others:
* The link:https://hbase.apache.org/book.html[HBase Reference
Guide] (this book)
* The link:https://hbase.apache.org/[HBase website]
* API documentation
* Command-line utility output and help text
* Web UI strings, explicit help text, context-sensitive strings, and others
* Log messages
* Comments in source files, configuration files, and others
* Localization of any of the above into target languages other than English
No matter which area you want to help out with, the first step is almost always
to download (typically by cloning the Git repository) and familiarize yourself
with the HBase source code. For information on downloading and building the source,
see <<developer,developer>>.
=== Contributing to Documentation or Other Strings
If you spot an error in a string in a UI, utility, script, log message, or elsewhere,
or you think something could be made more clear, or you think text needs to be added
where it doesn't currently exist, the first step is to file a JIRA. Be sure to set
the component to `Documentation` in addition to any other involved components. Most
components have one or more default owners, who monitor new issues which come into
those queues. Regardless of whether you feel able to fix the bug, you should still
file bugs where you see them.
If you want to try your hand at fixing your newly-filed bug, assign it to yourself.
You will need to clone the HBase Git repository to your local system and work on
the issue there. When you have developed a potential fix, submit it for review.
If it addresses the issue and is seen as an improvement, one of the HBase committers
will commit it to one or more branches, as appropriate.
[[submit_doc_patch_procedure]]
.Procedure: Suggested Work flow for Submitting Patches
This procedure goes into more detail than Git pros will need, but is included
in this appendix so that people unfamiliar with Git can feel confident contributing
to HBase while they learn.
. If you have not already done so, clone the Git repository locally.
You only need to do this once.
. Fairly often, pull remote changes into your local repository by using the
`git pull` command, while your tracking branch is checked out.
. For each issue you work on, create a new branch.
One convention that works well for naming the branches is to name a given branch
the same as the JIRA it relates to:
+
----
$ git checkout -b HBASE-123456
----
. Make your suggested changes on your branch, committing your changes to your
local repository often. If you need to switch to working on a different issue,
remember to check out the appropriate branch.
. When you are ready to submit your patch, first be sure that HBase builds cleanly
and behaves as expected in your modified branch.
. If you have made documentation changes, be sure the documentation and website
builds by running `mvn clean site`.
. If it takes you several days or weeks to implement your fix, or you know that
the area of the code you are working in has had a lot of changes lately, make
sure you rebase your branch against the remote master and take care of any conflicts
before submitting your patch.
+
----
$ git checkout HBASE-123456
$ git rebase origin/master
----
. Generate your patch against the remote master. Run the following command from
the top level of your git repository (usually called `hbase`):
+
----
$ git format-patch --stdout origin/master > HBASE-123456.patch
----
+
The name of the patch should contain the JIRA ID.
. Look over the patch file to be sure that you did not change any additional files
by accident and that there are no other surprises.
. When you are satisfied, attach the patch to the JIRA and click the
btn:[Patch Available] button. A reviewer will review your patch.
. If you need to submit a new version of the patch, leave the old one on the
JIRA and add a version number to the name of the new patch.
. After a change has been committed, there is no need to keep your local branch around.
=== Editing the HBase Website
The source for the HBase website is in the HBase source, in the _src/site/_ directory.
Within this directory, source for the individual pages is in the _xdocs/_ directory,
and images referenced in those pages are in the _resources/images/_ directory.
This directory also stores images used in the HBase Reference Guide.
The website's pages are written in an HTML-like XML dialect called xdoc, which
has a reference guide at
https://maven.apache.org/archives/maven-1.x/plugins/xdoc/reference/xdocs.html.
You can edit these files in a plain-text editor, an IDE, or an XML editor such
as XML Mind XML Editor (XXE) or Oxygen XML Author.
To preview your changes, build the website using the `mvn clean site -DskipTests`
command. The HTML output resides in the _target/site/_ directory.
When you are satisfied with your changes, follow the procedure in
<<submit_doc_patch_procedure,submit doc patch procedure>> to submit your patch.
[[website_publish]]
=== Publishing the HBase Website and Documentation
HBase uses the ASF's `gitpubsub` mechanism. A Jenkins job runs the
`dev-support/jenkins-scripts/generate-hbase-website.sh` script, which runs the
`mvn clean site site:stage` against the `master` branch of the `hbase`
repository and commits the built artifacts to the `asf-site` branch of the
`hbase-site` repository. When the commit is pushed, the website is redeployed
automatically. If the script encounters an error, an email is sent to the
developer mailing list. You can run the script manually or examine it to see the
steps involved.
[[website_check_links]]
=== Checking the HBase Website for Broken Links
A Jenkins job runs periodically to check HBase website for broken links, using
the `dev-support/jenkins-scripts/check-website-links.sh` script. This script
uses a tool called `linklint` to check for bad links and create a report. If
broken links are found, an email is sent to the developer mailing list. You can
run the script manually or examine it to see the steps involved.
=== HBase Reference Guide Style Guide and Cheat Sheet
The HBase Reference Guide is written in Asciidoc and built using link:http://asciidoctor.org[AsciiDoctor].
The following cheat sheet is included for your reference. More nuanced and comprehensive documentation
is available at http://asciidoctor.org/docs/user-manual/.
.AsciiDoc Cheat Sheet
[cols="1,1,a",options="header"]
|===
| Element Type | Desired Rendering | How to do it
| A paragraph | a paragraph | Just type some text with a blank line at the top and bottom.
| Add line breaks within a paragraph without adding blank lines | Manual line breaks | This will break + at the plus sign. Or prefix the whole paragraph with a line containing '[%hardbreaks]'
| Give a title to anything | Colored italic bold differently-sized text | .MyTitle (no space between the period and the words) on the line before the thing to be titled
| In-Line Code or commands | monospace | \`text`
| In-line literal content (things to be typed exactly as shown) | bold mono | \*\`typethis`*
| In-line replaceable content (things to substitute with your own values) | bold italic mono | \*\_typesomething_*
| Code blocks with highlighting | monospace, highlighted, preserve space |
........
[source,java]
----
myAwesomeCode() {
}
----
........
| Code block included from a separate file | included just as though it were part of the main file |
................
[source,ruby]
----
include\::path/to/app.rb[]
----
................
| Include only part of a separate file | Similar to Javadoc
| See http://asciidoctor.org/docs/user-manual/#by-tagged-regions
| Filenames, directory names, new terms | italic | \_hbase-default.xml_
| External naked URLs | A link with the URL as link text |
----
link:http://www.google.com
----
| External URLs with text | A link with arbitrary link text |
----
link:http://www.google.com[Google]
----
| Create an internal anchor to cross-reference | not rendered |
----
[[anchor_name]]
----
| Cross-reference an existing anchor using its default title| an internal hyperlink using the element title if available, otherwise using the anchor name |
----
<<anchor_name>>
----
| Cross-reference an existing anchor using custom text | an internal hyperlink using arbitrary text |
----
<<anchor_name,Anchor Text>>
----
| A block image | The image with alt text |
----
image::sunset.jpg[Alt Text]
----
(put the image in the src/site/resources/images directory)
| An inline image | The image with alt text, as part of the text flow |
----
image:sunset.jpg[Alt Text]
----
(only one colon)
| Link to a remote image | show an image hosted elsewhere |
----
image::http://inkscape.org/doc/examples/tux.svg[Tux,250,350]
----
(or `image:`)
| Add dimensions or a URL to the image | depends | inside the brackets after the alt text, specify width, height and/or link="http://my_link.com"
| A footnote | subscript link which takes you to the footnote |
----
Some text.footnote:[The footnote text.]
----
| A note or warning with no title | The admonition image followed by the admonition |
----
NOTE: My note here
----
----
WARNING: My warning here
----
| A complex note | The note has a title and/or multiple paragraphs and/or code blocks or lists, etc |
........
.The Title
[NOTE]
====
Here is the note text. Everything until the second set of four equals signs is part of the note.
----
some source code
----
====
........
| Bullet lists | bullet lists |
----
* list item 1
----
(see http://asciidoctor.org/docs/user-manual/#unordered-lists)
| Numbered lists | numbered list |
----
. list item 2
----
(see http://asciidoctor.org/docs/user-manual/#ordered-lists)
| Checklists | Checked or unchecked boxes |
Checked:
----
- [*]
----
Unchecked:
----
- [ ]
----
| Multiple levels of lists | bulleted or numbered or combo |
----
. Numbered (1), at top level
* Bullet (2), nested under 1
* Bullet (3), nested under 1
. Numbered (4), at top level
* Bullet (5), nested under 4
** Bullet (6), nested under 5
- [x] Checked (7), at top level
----
| Labelled lists / variablelists | a list item title or summary followed by content |
----
Title:: content
Title::
content
----
| Sidebars, quotes, or other blocks of text
| a block of text, formatted differently from the default
| Delimited using different delimiters,
see http://asciidoctor.org/docs/user-manual/#built-in-blocks-summary.
Some of the examples above use delimiters like \...., ----, and ====.
........
[example]
====
This is an example block.
====
[source]
----
This is a source block.
----
[note]
====
This is a note block.
====
[quote]
____
This is a quote block.
____
........
If you want to insert literal Asciidoc content but it keeps being interpreted, when in doubt, use eight dots as the delimiter at the top and bottom.
| Nested Sections | chapter, section, sub-section, etc |
----
= Book (or chapter if the chapter can be built alone, see the leveloffset info below)
== Chapter (or section if the chapter is standalone)
=== Section (or subsection, etc)
==== Subsection
----
and so on up to 6 levels (think carefully about going deeper than 4 levels; maybe you can just use titled paragraphs or lists instead). Note that you can include a book inside another book by adding the `:leveloffset:+1` macro directive directly before your include, and resetting it to 0 directly after. See the _book.adoc_ source for examples, as this is how this guide handles chapters. *Don't do it for prefaces, glossaries, appendixes, or other special types of chapters.*
| Include one file from another | Content is included as though it were inline |
----
include::path/to/file.adoc[]
----
For plenty of examples, see _book.adoc_.
| A table | a table | See http://asciidoctor.org/docs/user-manual/#tables. Generally rows are separated by newlines and columns by pipes
| Comment out a single line | A line is skipped during rendering |
`+//+ This line won't show up`
| Comment out a block | A section of the file is skipped during rendering |
----
////
Nothing between the slashes will show up.
////
----
| Highlight text for review | text shows up with yellow background |
----
Text between #hash marks# is highlighted yellow.
----
|===
=== Auto-Generated Content
Some parts of the HBase Reference Guide, most notably <<config.files,config.files>>,
are generated automatically, so that this area of the documentation stays in
sync with the code. This is done by means of an XSLT transform, which you can examine
in the source at _src/main/xslt/configuration_to_asciidoc_chapter.xsl_. This
transforms the _hbase-common/src/main/resources/hbase-default.xml_ file into an
Asciidoc output which can be included in the Reference Guide.
Sometimes, it is necessary to add configuration parameters or modify their descriptions.
Make the modifications to the source file, and they will be included in the
Reference Guide when it is rebuilt.
It is possible that other types of content can and will be automatically generated
from HBase source files in the future.
=== Images in the HBase Reference Guide
You can include images in the HBase Reference Guide. It is important to include
an image title if possible, and alternate text always. This allows screen readers
to navigate to the image and also provides alternative text for the image.
The following is an example of an image with a title and alternate text. Notice
the double colon.
[source,asciidoc]
----
.My Image Title
image::sunset.jpg[Alt Text]
----
Here is an example of an inline image with alternate text. Notice the single colon.
Inline images cannot have titles. They are generally small images like GUI buttons.
[source,asciidoc]
----
image:sunset.jpg[Alt Text]
----
When doing a local build, save the image to the _src/site/resources/images/_ directory.
When you link to the image, do not include the directory portion of the path.
The image will be copied to the appropriate target location during the build of the output.
When you submit a patch which includes adding an image to the HBase Reference Guide,
attach the image to the JIRA. If the committer asks where the image should be
committed, it should go into the above directory.
=== Adding a New Chapter to the HBase Reference Guide
If you want to add a new chapter to the HBase Reference Guide, the easiest way
is to copy an existing chapter file, rename it, and change the ID (in double
brackets) and title. Chapters are located in the _src/main/asciidoc/_chapters/_
directory.
Delete the existing content and create the new content. Then open the
_src/main/asciidoc/book.adoc_ file, which is the main file for the HBase Reference
Guide, and copy an existing `include` element to include your new chapter in the
appropriate location. Be sure to add your new file to your Git repository before
creating your patch.
When in doubt, check to see how other files have been included.
=== Common Documentation Issues
The following documentation issues come up often. Some of these are preferences,
but others can create mysterious build errors or other problems.
[qanda]
Isolate Changes for Easy Diff Review.::
Be careful with pretty-printing or re-formatting an entire XML file, even if
the formatting has degraded over time. If you need to reformat a file, do that
in a separate JIRA where you do not change any content. Be careful because some
XML editors do a bulk-reformat when you open a new file, especially if you use
GUI mode in the editor.
Syntax Highlighting::
The HBase Reference Guide uses `coderay` for syntax highlighting. To enable
syntax highlighting for a given code listing, use the following type of syntax:
+
........
[source,xml]
----
<name>My Name</name>
----
........
+
Several syntax types are supported. The most interesting ones for the HBase
Reference Guide are `java`, `xml`, `sql`, and `bash`.


@@ -1,714 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[appendix]
== Known Incompatibilities Among HBase Versions
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
:source-language: java
== HBase 2.0 Incompatible Changes
This appendix describes incompatible changes from earlier versions of HBase against HBase 2.0.
This list is not meant to be wholly encompassing of all possible incompatibilities.
Instead, this content is intended to give insight into some obvious incompatibilities which most
users will face coming from HBase 1.x releases.
=== List of Major Changes for HBase 2.0
* HBASE-1912 - HBCK is an HBase database checking tool for catching inconsistencies. As an HBase administrator, you should not use the HBase 1.0 version of the hbck tool to check an HBase 2.0 database; doing so will break the database and throw an exception.
* HBASE-16189 and HBASE-18945 - You cannot open HBase 2.0 hfiles with HBase 1.x. If you are an admin or an HBase user on HBase 1.x, you must first do a rolling upgrade to the latest version of HBase 1.x and then upgrade to HBase 2.0.
* HBASE-18240 - Changed the ReplicationEndpoint interface. It also introduces hbase-thirdparty 1.0, which packages all the third-party utilities that are expected to run in the HBase cluster.
=== Coprocessor API changes
* HBASE-16769 - Deprecated PB references from MasterObserver and RegionServerObserver.
* HBASE-17312 - [JDK8] Use default method for Observer Coprocessors. The interface classes BaseMasterAndRegionObserver, BaseMasterObserver, BaseRegionObserver, BaseRegionServerObserver and BaseWALObserver use JDK8's `default` keyword to provide empty and no-op implementations.
* Interface HTableInterface
HBase 2.0 introduces the following changes to the methods listed below:
==== interface CoprocessorEnvironment changes (2)
[cols="1,1", frame="all"]
|===
| Change | Result
| Abstract method getTable ( TableName ) has been removed. | A client program may be interrupted by NoSuchMethodError exception.
| Abstract method getTable ( TableName, ExecutorService ) has been removed. | A client program may be interrupted by NoSuchMethodError exception.
|===
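For code that used the removed `getTable` methods, one replacement path is the ordinary client API; a hedged sketch (whether a coprocessor should open its own `Connection` is a separate design question):
[source,java]
----
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public class GetTableSketch {
  // Replacement call shape for the removed CoprocessorEnvironment.getTable:
  // open a Connection and ask it for the Table. The caller owns closing both.
  static Table getTable(Configuration conf, TableName name) throws IOException {
    Connection conn = ConnectionFactory.createConnection(conf);
    return conn.getTable(name);
  }
}
----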
* Public Audience
The following tables describe the coprocessor changes.
===== class CoprocessorRpcChannel (1)
[cols="1,1", frame="all"]
|===
| Change | Result
| This class has become an interface.| A client program may be interrupted by IncompatibleClassChangeError or InstantiationError exception depending on the usage of this class.
|===
===== Class CoprocessorHost<E>
Classes that were Audience Private but were removed.
[cols="1,1", frame="all"]
|===
| Change | Result
| Type of field coprocessors has been changed from java.util.SortedSet<E> to org.apache.hadoop.hbase.util.SortedList<E>.| A client program may be interrupted by NoSuchFieldError exception.
|===
==== MasterObserver
HBase 2.0 introduces the following changes to the MasterObserver interface.
===== interface MasterObserver (14)
[cols="1,1", frame="all"]
|===
| Change | Result
| Abstract method void postCloneSnapshot ( ObserverContext<MasterCoprocessorEnvironment>, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void postCreateTable ( ObserverContext<MasterCoprocessorEnvironment>, HTableDescriptor, HRegionInfo[ ] ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void postDeleteSnapshot ( ObserverContext<MasterCoprocessorEnvironment>, HBaseProtos.SnapshotDescription ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void postGetTableDescriptors ( ObserverContext<MasterCoprocessorEnvironment>, List<HTableDescriptor> ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void postModifyTable ( ObserverContext<MasterCoprocessorEnvironment>, TableName, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void postRestoreSnapshot ( ObserverContext<MasterCoprocessorEnvironment>, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void postSnapshot ( ObserverContext<MasterCoprocessorEnvironment>, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void preCloneSnapshot ( ObserverContext<MasterCoprocessorEnvironment>, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void preCreateTable ( ObserverContext<MasterCoprocessorEnvironment>, HTableDescriptor, HRegionInfo[ ] ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void preDeleteSnapshot ( ObserverContext<MasterCoprocessorEnvironment>, HBaseProtos.SnapshotDescription ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void preGetTableDescriptors ( ObserverContext<MasterCoprocessorEnvironment>, List<TableName>, List<HTableDescriptor> ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void preModifyTable ( ObserverContext<MasterCoprocessorEnvironment>, TableName, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void preRestoreSnapshot ( ObserverContext<MasterCoprocessorEnvironment>, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void preSnapshot ( ObserverContext<MasterCoprocessorEnvironment>, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
|===
==== RegionObserver
HBase 2.0 introduces the following changes to the RegionObserver interface.
===== interface RegionObserver (13)
[cols="1,1", frame="all"]
|===
| Change | Result
| Abstract method void postCloseRegionOperation ( ObserverContext<RegionCoprocessorEnvironment>, HRegion.Operation ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void postCompactSelection ( ObserverContext<RegionCoprocessorEnvironment>, Store, ImmutableList<StoreFile> ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void postCompactSelection ( ObserverContext<RegionCoprocessorEnvironment>, Store, ImmutableList<StoreFile>, CompactionRequest ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void postGetClosestRowBefore ( ObserverContext<RegionCoprocessorEnvironment>, byte[ ], byte[ ], Result ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method DeleteTracker postInstantiateDeleteTracker ( ObserverContext<RegionCoprocessorEnvironment>, DeleteTracker ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void postSplit ( ObserverContext<RegionCoprocessorEnvironment>, HRegion, HRegion ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void postStartRegionOperation ( ObserverContext<RegionCoprocessorEnvironment>, HRegion.Operation ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method StoreFile.Reader postStoreFileReaderOpen ( ObserverContext<RegionCoprocessorEnvironment>, FileSystem, Path, FSDataInputStreamWrapper, long, CacheConfig, Reference, StoreFile.Reader ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void postWALRestore ( ObserverContext<RegionCoprocessorEnvironment>, HRegionInfo, HLogKey, WALEdit ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method InternalScanner preFlushScannerOpen ( ObserverContext<RegionCoprocessorEnvironment>, Store, KeyValueScanner, InternalScanner ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void preGetClosestRowBefore ( ObserverContext<RegionCoprocessorEnvironment>, byte[ ], byte[ ], Result ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method StoreFile.Reader preStoreFileReaderOpen ( ObserverContext<RegionCoprocessorEnvironment>, FileSystem, Path, FSDataInputStreamWrapper, long, CacheConfig, Reference, StoreFile.Reader ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method void preWALRestore ( ObserverContext<RegionCoprocessorEnvironment>, HRegionInfo, HLogKey, WALEdit ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
|===
==== WALObserver
HBase 2.0 introduces the following changes to the WALObserver interface.
===== interface WALObserver
[cols="1,1", frame="all"]
|===
| Change | Result
| Abstract method void postWALWrite ( ObserverContext<WALCoprocessorEnvironment>, HRegionInfo, HLogKey, WALEdit ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method boolean preWALWrite ( ObserverContext<WALCoprocessorEnvironment>, HRegionInfo, HLogKey, WALEdit ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
|===
==== Miscellaneous
HBase 2.0 introduces changes to the following classes:
hbase-server-1.0.0.jar, OnlineRegions.class package org.apache.hadoop.hbase.regionserver
[cols="1,1", frame="all"]
===== [] OnlineRegions.getFromOnlineRegions ( String p1 ) [abstract] : HRegion
org/apache/hadoop/hbase/regionserver/OnlineRegions.getFromOnlineRegions:(Ljava/lang/String;)Lorg/apache/hadoop/hbase/regionserver/HRegion;
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from HRegion to Region.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
hbase-server-1.0.0.jar, RegionCoprocessorEnvironment.class package org.apache.hadoop.hbase.coprocessor
===== RegionCoprocessorEnvironment.getRegion ( ) [abstract] : HRegion
org/apache/hadoop/hbase/coprocessor/RegionCoprocessorEnvironment.getRegion:()Lorg/apache/hadoop/hbase/regionserver/HRegion;
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from org.apache.hadoop.hbase.regionserver.HRegion to org.apache.hadoop.hbase.regionserver.Region.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
hbase-server-1.0.0.jar, RegionCoprocessorHost.class package org.apache.hadoop.hbase.regionserver
===== RegionCoprocessorHost.postAppend ( Append append, Result result ) : void
org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.postAppend:(Lorg/apache/hadoop/hbase/client/Append;Lorg/apache/hadoop/hbase/client/Result;)V
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from void to org.apache.hadoop.hbase.client.Result.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
===== RegionCoprocessorHost.preStoreFileReaderOpen ( FileSystem fs, Path p, FSDataInputStreamWrapper in, long size, CacheConfig cacheConf, Reference r ) : StoreFile.Reader
org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.preStoreFileReaderOpen:(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/fs/Path;Lorg/apache/hadoop/hbase/io/FSDataInputStreamWrapper;JLorg/apache/hadoop/hbase/io/hfile/CacheConfig;Lorg/apache/hadoop/hbase/io/Reference;)Lorg/apache/hadoop/hbase/regionserver/StoreFile$Reader;
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from StoreFile.Reader to StoreFileReader.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
==== IPC
==== Scheduler changes:
The following methods became abstract:
package org.apache.hadoop.hbase.ipc
===== class RpcScheduler (1)
[cols="1,1", frame="all"]
|===
| Change | Result
| Abstract method void dispatch ( CallRunner ) has been removed from this class.| A client program may be interrupted by NoSuchMethodError exception.
|===
hbase-server-1.0.0.jar, RpcScheduler.class package org.apache.hadoop.hbase.ipc
===== RpcScheduler.dispatch ( CallRunner p1 ) [abstract] : void
org/apache/hadoop/hbase/ipc/RpcScheduler.dispatch:(Lorg/apache/hadoop/hbase/ipc/CallRunner;)V
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from void to boolean.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
The following abstract methods have been removed:
===== interface PriorityFunction (2)
[cols="1,1", frame="all"]
|===
| Change | Result
| Abstract method long getDeadline ( RPCProtos.RequestHeader, Message ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method int getPriority ( RPCProtos.RequestHeader, Message ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
|===
==== Server API changes:
===== class RpcServer (12)
[cols="1,1", frame="all"]
|===
| Change | Result
| Type of field CurCall has been changed from java.lang.ThreadLocal<RpcServer.Call> to java.lang.ThreadLocal<RpcCall>.| A client program may be interrupted by NoSuchFieldError exception.
| This class became abstract.| A client program may be interrupted by InstantiationError exception.
| Abstract method int getNumOpenConnections ( ) has been added to this class.| This class became abstract and a client program may be interrupted by InstantiationError exception.
| Field callQueueSize of type org.apache.hadoop.hbase.util.Counter has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
| Field connectionList of type java.util.List<RpcServer.Connection> has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
| Field maxIdleTime of type int has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
| Field numConnections of type int has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
| Field port of type int has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
| Field purgeTimeout of type long has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
| Field responder of type RpcServer.Responder has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
| Field socketSendBufferSize of type int has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
| Field thresholdIdleConnections of type int has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
|===
The following abstract method has been removed:
[cols="1,1", frame="all"]
|===
| Change | Result
| Abstract method Pair<Message,CellScanner> call ( BlockingService, Descriptors.MethodDescriptor, Message, CellScanner, long, MonitoredRPCHandler ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
|===
==== Replication and WAL changes:
HBASE-18733: WALKey has been purged completely in HBase 2.0.
Following are the changes to the WALKey:
===== [] class WALKey (8)
[cols="1,1", frame="all"]
|===
| Change | Result
| Access level of field clusterIds has been changed from protected to private.| A client program may be interrupted by IllegalAccessError exception.
| Access level of field compressionContext has been changed from protected to private.| A client program may be interrupted by IllegalAccessError exception.
| Access level of field encodedRegionName has been changed from protected to private.| A client program may be interrupted by IllegalAccessError exception.
| Access level of field tablename has been changed from protected to private.| A client program may be interrupted by IllegalAccessError exception.
| Access level of field writeTime has been changed from protected to private.| A client program may be interrupted by IllegalAccessError exception.
|===
Following fields have been removed:
[cols="1,1", frame="all"]
|===
| Change | Result
| Field LOG of type org.apache.commons.logging.Log has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
| Field VERSION of type WALKey.Version has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
| Field logSeqNum of type long has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
|===
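Client code that read the formerly protected fields must go through accessors instead. A minimal sketch (accessor names taken from the WALKey API; treat this as illustrative, not exhaustive):
[source,java]
----
import org.apache.hadoop.hbase.wal.WALKey;

public class WalKeyAccessExample {
  // Read values that 1.x code may have taken directly from the
  // formerly protected fields, which are now private.
  static void describe(WALKey key) {
    long writeTime = key.getWriteTime();                   // was the protected field writeTime
    byte[] encodedRegionName = key.getEncodedRegionName(); // was the protected field encodedRegionName
    System.out.println(writeTime + " " + encodedRegionName.length);
  }
}
----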
Following are the changes to the WALEdit.class:
hbase-server-1.0.0.jar, WALEdit.class package org.apache.hadoop.hbase.regionserver.wal
===== WALEdit.getCompaction ( Cell kv ) [static] : WALProtos.CompactionDescriptor (1)
org/apache/hadoop/hbase/regionserver/wal/WALEdit.getCompaction:(Lorg/apache/hadoop/hbase/Cell;)Lorg/apache/hadoop/hbase/protobuf/generated/WALProtos$CompactionDescriptor;
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.WALProtos.CompactionDescriptor to org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.CompactionDescriptor.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
===== WALEdit.getFlushDescriptor ( Cell cell ) [static] : WALProtos.FlushDescriptor (1)
org/apache/hadoop/hbase/regionserver/wal/WALEdit.getFlushDescriptor:(Lorg/apache/hadoop/hbase/Cell;)Lorg/apache/hadoop/hbase/protobuf/generated/WALProtos$FlushDescriptor;
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.WALProtos.FlushDescriptor to org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.FlushDescriptor.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
===== WALEdit.getRegionEventDescriptor ( Cell cell ) [static] : WALProtos.RegionEventDescriptor (1)
org/apache/hadoop/hbase/regionserver/wal/WALEdit.getRegionEventDescriptor:(Lorg/apache/hadoop/hbase/Cell;)Lorg/apache/hadoop/hbase/protobuf/generated/WALProtos$RegionEventDescriptor;
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.WALProtos.RegionEventDescriptor to org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.RegionEventDescriptor.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
Following is the change to the WALKey.class:
package org.apache.hadoop.hbase.wal
===== WALKey.getBuilder ( WALCellCodec.ByteStringCompressor compressor ) : WALProtos.WALKey.Builder 1
org/apache/hadoop/hbase/wal/WALKey.getBuilder:(Lorg/apache/hadoop/hbase/regionserver/wal/WALCellCodec$ByteStringCompressor;)Lorg/apache/hadoop/hbase/protobuf/generated/WALProtos$WALKey$Builder;
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.WALProtos.WALKey.Builder to org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.WALKey.Builder.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
==== Deprecated APIs or coprocessor:
HBASE-16769 - PB references from MasterObserver and RegionServerObserver have been removed.
==== Admin Interface API changes:
You cannot administer an HBase 2.0 cluster with an HBase 1.0 client; this includes ReplicationAdmin, ACC, and Thrift and REST usage of Admin ops. Methods that returned protobufs have been changed to return POJOs instead; protobufs are no longer used in the APIs. Return types of async methods have changed from void to Future.
HBASE-18106 - Admin.listProcedures and Admin.listLocks were renamed to getProcedures and getLocks.
MapReduce makes use of Admin, calling admin.getClusterStatus() to calculate splits.
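A minimal sketch of waiting on one of the new Future-returning async methods (assuming an open Connection; the table name is illustrative):
[source,java]
----
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class AsyncAdminExample {
  // In 1.x disableTableAsync returned void; in 2.0 it returns a Future
  // that the caller can wait on for completion.
  static void disableAndWait(Connection conn)
      throws IOException, InterruptedException, ExecutionException {
    try (Admin admin = conn.getAdmin()) {
      Future<Void> f = admin.disableTableAsync(TableName.valueOf("testtable"));
      f.get(); // block until the disable completes
    }
  }
}
----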
* Thrift usage of Admin API:
+
----
compact(ByteBuffer)
createTable(ByteBuffer, List<ColumnDescriptor>)
deleteTable(ByteBuffer)
disableTable(ByteBuffer)
enableTable(ByteBuffer)
getTableNames()
majorCompact(ByteBuffer)
----
* REST usage of Admin API (hbase-rest, package org.apache.hadoop.hbase.rest):
+
----
RootResource
  getTableList()
    TableName[] tableNames = servlet.getAdmin().listTableNames();
SchemaResource
  delete(UriInfo)
    Admin admin = servlet.getAdmin();
  update(TableSchemaModel, boolean, UriInfo)
    Admin admin = servlet.getAdmin();
StorageClusterStatusResource
  get(UriInfo)
    ClusterStatus status = servlet.getAdmin().getClusterStatus();
StorageClusterVersionResource
  get(UriInfo)
    model.setVersion(servlet.getAdmin().getClusterStatus().getHBaseVersion());
TableResource
  exists()
    return servlet.getAdmin().tableExists(TableName.valueOf(table));
----
Following are the changes to the Admin interface:
===== [] interface Admin (9)
[cols="1,1", frame="all"]
|===
| Change | Result
| Abstract method createTableAsync ( HTableDescriptor, byte[ ][ ] ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method disableTableAsync ( TableName ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method enableTableAsync ( TableName ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method getCompactionState ( TableName ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method getCompactionStateForRegion ( byte[ ] ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method isSnapshotFinished ( HBaseProtos.SnapshotDescription ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method snapshot ( String, TableName, HBaseProtos.SnapshotDescription.Type ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method snapshot ( HBaseProtos.SnapshotDescription ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method takeSnapshotAsync ( HBaseProtos.SnapshotDescription ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
|===
Following are the changes to the Admin.class:
hbase-client-1.0.0.jar, Admin.class package org.apache.hadoop.hbase.client
===== [] Admin.createTableAsync ( HTableDescriptor p1, byte[ ][ ] p2 ) [abstract] : void 1
org/apache/hadoop/hbase/client/Admin.createTableAsync:(Lorg/apache/hadoop/hbase/HTableDescriptor;[[B)V
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from void to java.util.concurrent.Future<java.lang.Void>.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
===== [] Admin.disableTableAsync ( TableName p1 ) [abstract] : void 1
org/apache/hadoop/hbase/client/Admin.disableTableAsync:(Lorg/apache/hadoop/hbase/TableName;)V
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from void to java.util.concurrent.Future<java.lang.Void>.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
===== Admin.enableTableAsync ( TableName p1 ) [abstract] : void 1
org/apache/hadoop/hbase/client/Admin.enableTableAsync:(Lorg/apache/hadoop/hbase/TableName;)V
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from void to java.util.concurrent.Future<java.lang.Void>.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
===== [] Admin.getCompactionState ( TableName p1 ) [abstract] : AdminProtos.GetRegionInfoResponse.CompactionState 1
org/apache/hadoop/hbase/client/Admin.getCompactionState:(Lorg/apache/hadoop/hbase/TableName;)Lorg/apache/hadoop/hbase/protobuf/generated/AdminProtos$GetRegionInfoResponse$CompactionState;
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.AdminProtos.GetRegionInfoResponse.CompactionState to CompactionState.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
===== [] Admin.getCompactionStateForRegion ( byte[ ] p1 ) [abstract] : AdminProtos.GetRegionInfoResponse.CompactionState 1
org/apache/hadoop/hbase/client/Admin.getCompactionStateForRegion:([B)Lorg/apache/hadoop/hbase/protobuf/generated/AdminProtos$GetRegionInfoResponse$CompactionState;
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.AdminProtos.GetRegionInfoResponse.CompactionState to CompactionState.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
==== HTableDescriptor and HColumnDescriptor changes
HTableDescriptor and HColumnDescriptor have become interfaces and you create instances through builders. HCD has become CFD. They no longer implement the Writable interface.
package org.apache.hadoop.hbase
===== [] class HColumnDescriptor (1)
[cols="1,1", frame="all"]
|===
| Change | Result
| Removed super-interface org.apache.hadoop.io.WritableComparable<HColumnDescriptor>.| A client program may be interrupted by NoSuchMethodError exception.
|===
HColumnDescriptor in 1.0.0:
[source,java]
----
@InterfaceAudience.Public
@InterfaceStability.Evolving
public class HColumnDescriptor implements WritableComparable<HColumnDescriptor> {
----
HColumnDescriptor in 2.0:
[source,java]
----
@InterfaceAudience.Public
@Deprecated // remove it in 3.0
public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HColumnDescriptor> {
----
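For example, a 2.0 client builds descriptors roughly like this (a minimal sketch; the table and family names are illustrative):
[source,java]
----
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class BuilderExample {
  // Build a table descriptor with one column family via the 2.0 builders,
  // replacing direct construction of HTableDescriptor/HColumnDescriptor.
  public static TableDescriptor build() {
    ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("cf"))
        .setMaxVersions(3)
        .build();
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("testtable"))
        .setColumnFamily(cf)
        .build();
  }
}
----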
For META_TABLEDESC, the maker method had been deprecated already in HTD in 1.0.0. OWNER_KEY is still in HTD.
===== class HTableDescriptor (3)
[cols="1,1", frame="all"]
|===
| Change | Result
| Removed super-interface org.apache.hadoop.io.WritableComparable<HTableDescriptor>.| A client program may be interrupted by NoSuchMethodError exception.
| Field META_TABLEDESC of type HTableDescriptor has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
|===
hbase-client-1.0.0.jar, HTableDescriptor.class package org.apache.hadoop.hbase
===== [] HTableDescriptor.getColumnFamilies ( ) : HColumnDescriptor[ ] (1)
org/apache/hadoop/hbase/HTableDescriptor.getColumnFamilies:()[Lorg/apache/hadoop/hbase/HColumnDescriptor;
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from HColumnDescriptor[] to client.ColumnFamilyDescriptor[].| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
===== [] HTableDescriptor.getCoprocessors ( ) : List<String> (1)
org/apache/hadoop/hbase/HTableDescriptor.getCoprocessors:()Ljava/util/List;
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from java.util.List<java.lang.String> to java.util.Collection.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
* HBASE-12990 MetaScanner is removed and it is replaced by MetaTableAccessor.
===== HTableWrapper changes:
hbase-server-1.0.0.jar, HTableWrapper.class package org.apache.hadoop.hbase.client
===== [] HTableWrapper.createWrapper ( List<HTableInterface> openTables, TableName tableName, CoprocessorHost.Environment env, ExecutorService pool ) [static] : HTableInterface 1
org/apache/hadoop/hbase/client/HTableWrapper.createWrapper:(Ljava/util/List;Lorg/apache/hadoop/hbase/TableName;Lorg/apache/hadoop/hbase/coprocessor/CoprocessorHost$Environment;Ljava/util/concurrent/ExecutorService;)Lorg/apache/hadoop/hbase/client/HTableInterface;
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from HTableInterface to Table.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
* HBASE-12586: Delete all public HTable constructors and delete ConnectionManager#{delete,get}Connection.
* HBASE-9117: Remove HTablePool and all HConnection pooling related APIs.
* HBASE-13214: Remove deprecated and unused methods from HTable class
Following are the changes to the Table interface:
===== [] interface Table (4)
[cols="1,1", frame="all"]
|===
| Change | Result
| Abstract method batch ( List<?> ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method batchCallback ( List<?>, Batch.Callback<R> ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method getWriteBufferSize ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method setWriteBufferSize ( long ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
|===
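Client code that used the removed one-argument `batch(List<?>)` can pass a results array instead. A minimal sketch (assuming an open Table; row keys are illustrative):
[source,java]
----
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchExample {
  // The two-argument batch(List<? extends Row>, Object[]) remains;
  // results are written into the supplied array.
  static Object[] runBatch(Table table) throws IOException, InterruptedException {
    List<Get> actions =
        Arrays.asList(new Get(Bytes.toBytes("row1")), new Get(Bytes.toBytes("row2")));
    Object[] results = new Object[actions.size()];
    table.batch(actions, results);
    return results;
  }
}
----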
==== Buffer methods deprecated in Table (in 1.0.1) and removed in 2.0.0
* HBASE-13298 - Clarify if Table.{set|get}WriteBufferSize() is deprecated or not.
* LockTimeoutException and OperationConflictException classes have been removed.
==== class OperationConflictException (1)
[cols="1,1", frame="all"]
|===
| Change | Result
| This class has been removed.| A client program may be interrupted by NoClassDefFoundError exception.
|===
==== class LockTimeoutException (1)
[cols="1,1", frame="all"]
|===
| Change | Result
| This class has been removed.| A client program may be interrupted by NoClassDefFoundError exception.
|===
==== Filter API changes:
Following methods have been removed:
package org.apache.hadoop.hbase.filter
===== [] class Filter (2)
[cols="1,1", frame="all"]
|===
| Change | Result
| Abstract method getNextKeyHint ( KeyValue ) has been removed from this class.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method transform ( KeyValue ) has been removed from this class.| A client program may be interrupted by NoSuchMethodError exception.
|===
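Custom filters that overrode the removed KeyValue-based methods can override the Cell-based variants instead. A minimal sketch (method names from the 2.0 Filter API; a real filter implements the remaining hooks as needed):
[source,java]
----
import java.io.IOException;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.filter.FilterBase;

public class PassThroughFilter extends FilterBase {
  @Override
  public Cell transformCell(Cell v) throws IOException {
    return v; // was transform(KeyValue) in 1.x
  }

  @Override
  public Cell getNextCellHint(Cell currentCell) throws IOException {
    return null; // was getNextKeyHint(KeyValue); null means no hint here
  }
}
----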
* HBASE-12296 Filters should work with ByteBufferedCell.
* HConnection is removed in HBase 2.0.
* RegionLoad and ServerLoad internally moved to shaded PB.
===== [] class RegionLoad (1)
[cols="1,1", frame="all"]
|===
| Change | Result
| Type of field regionLoadPB has been changed from protobuf.generated.ClusterStatusProtos.RegionLoad to shaded.protobuf.generated.ClusterStatusProtos.RegionLoad.|A client program may be interrupted by NoSuchFieldError exception.
|===
* HBASE-15783: AccessControlConstants#OP_ATTRIBUTE_ACL_STRATEGY_CELL_FIRST is not used any more.
package org.apache.hadoop.hbase.security.access
===== [] interface AccessControlConstants (3)
[cols="1,1", frame="all"]
|===
| Change | Result
| Field OP_ATTRIBUTE_ACL_STRATEGY of type java.lang.String has been removed from this interface.| A client program may be interrupted by NoSuchFieldError exception.
| Field OP_ATTRIBUTE_ACL_STRATEGY_CELL_FIRST of type byte[] has been removed from this interface.| A client program may be interrupted by NoSuchFieldError exception.
| Field OP_ATTRIBUTE_ACL_STRATEGY_DEFAULT of type byte[] has been removed from this interface.| A client program may be interrupted by NoSuchFieldError exception.
|===
===== ServerLoad returns long instead of int 1
hbase-client-1.0.0.jar, ServerLoad.class package org.apache.hadoop.hbase
===== [] ServerLoad.getNumberOfRequests ( ) : int 1
org/apache/hadoop/hbase/ServerLoad.getNumberOfRequests:()I
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from int to long.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
===== [] ServerLoad.getReadRequestsCount ( ) : int 1
org/apache/hadoop/hbase/ServerLoad.getReadRequestsCount:()I
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from int to long.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
===== [] ServerLoad.getTotalNumberOfRequests ( ) : int 1
org/apache/hadoop/hbase/ServerLoad.getTotalNumberOfRequests:()I
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from int to long.|This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
===== []ServerLoad.getWriteRequestsCount ( ) : int 1
org/apache/hadoop/hbase/ServerLoad.getWriteRequestsCount:()I
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from int to long.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
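Callers should widen their local variables accordingly. A minimal sketch:
[source,java]
----
import org.apache.hadoop.hbase.ServerLoad;

public class ServerLoadExample {
  // These request counters returned int in 1.x; 2.0 widens them to long,
  // so store the results in longs to avoid truncation.
  static long totalRequests(ServerLoad load) {
    return load.getTotalNumberOfRequests();
  }
}
----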
* HBASE-13636 Remove deprecation for HBASE-4072 (Reading of zoo.cfg)
* Some HConstants have been removed, e.g. HBASE-16040 removed the configuration "hbase.replication".
===== []class HConstants (6)
[cols="1,1", frame="all"]
|===
| Change | Result
| Field DEFAULT_HBASE_CONFIG_READ_ZOOKEEPER_CONFIG of type boolean has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
| Field HBASE_CONFIG_READ_ZOOKEEPER_CONFIG of type java.lang.String has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
| Field REPLICATION_ENABLE_DEFAULT of type boolean has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
| Field REPLICATION_ENABLE_KEY of type java.lang.String has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
| Field ZOOKEEPER_CONFIG_NAME of type java.lang.String has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
| Field ZOOKEEPER_USEMULTI of type java.lang.String has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
|===
* HBASE-18732: [compat 1-2] HBASE-14047 removed Cell methods without deprecation cycle.
===== []interface Cell 5
[cols="1,1", frame="all"]
|===
| Change | Result
| Abstract method getFamily ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method getMvccVersion ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method getQualifier ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method getRow ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
| Abstract method getValue ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
|===
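Client code that used the removed copy-style accessors can use the CellUtil helpers instead. A minimal sketch:
[source,java]
----
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;

public class CellAccessExample {
  // The clone* helpers copy the component into a fresh byte[],
  // matching what the removed Cell methods used to return.
  static byte[][] copyParts(Cell cell) {
    byte[] row = CellUtil.cloneRow(cell);             // was cell.getRow()
    byte[] family = CellUtil.cloneFamily(cell);       // was cell.getFamily()
    byte[] qualifier = CellUtil.cloneQualifier(cell); // was cell.getQualifier()
    byte[] value = CellUtil.cloneValue(cell);         // was cell.getValue()
    return new byte[][] { row, family, qualifier, value };
  }
}
----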
* HBASE-18795: Expose KeyValue.getBuffer() for tests alone. KV#getBuffer, which was deprecated previously, is allowed in tests only.
==== Region scanner changes:
===== []interface RegionScanner (1)
[cols="1,1", frame="all"]
|===
| Change | Result
| Abstract method boolean nextRaw ( List<Cell>, int ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
|===
==== StoreFile changes:
===== [] class StoreFile (1)
[cols="1,1", frame="all"]
|===
| Change | Result
| This class became an interface.| A client program may be interrupted by IncompatibleClassChangeError or InstantiationError exception, depending on the usage of this class.
|===
==== Mapreduce changes:
The HFile*Format classes have been removed in HBase 2.0.
==== ClusterStatus changes:
HBASE-15843: Replace RegionState.getRegionInTransition() Map with a Set
hbase-client-1.0.0.jar, ClusterStatus.class package org.apache.hadoop.hbase
===== [] ClusterStatus.getRegionsInTransition ( ) : Map<String,RegionState> 1
org/apache/hadoop/hbase/ClusterStatus.getRegionsInTransition:()Ljava/util/Map;
[cols="1,1", frame="all"]
|===
| Change | Result
|Return value type has been changed from java.util.Map<java.lang.String,master.RegionState> to java.util.List<master.RegionState>.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
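Callers iterate the returned list rather than a map. A minimal sketch (accessor names assumed from the 2.0 RegionState API):
[source,java]
----
import java.util.List;

import org.apache.hadoop.hbase.ClusterStatus;
import org.apache.hadoop.hbase.master.RegionState;

public class RitExample {
  // 2.0 returns a List<RegionState> rather than a Map keyed by region name.
  static void printRegionsInTransition(ClusterStatus status) {
    List<RegionState> rit = status.getRegionsInTransition();
    for (RegionState rs : rit) {
      System.out.println(rs.getRegion().getRegionNameAsString() + " -> " + rs.getState());
    }
  }
}
----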
Other changes in ClusterStatus include removal of convert methods that were no longer necessary after purge of PB from API.
==== Purge of PBs from API
PBs have been deprecated in APIs in HBase 2.0.
===== [] HBaseSnapshotException.getSnapshotDescription ( ) : HBaseProtos.SnapshotDescription 1
org/apache/hadoop/hbase/snapshot/HBaseSnapshotException.getSnapshotDescription:()Lorg/apache/hadoop/hbase/protobuf/generated/HBaseProtos$SnapshotDescription;
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription to org.apache.hadoop.hbase.client.SnapshotDescription.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
* HBASE-15609: Remove PB references from Result, DoubleColumnInterpreter and any such public facing class for 2.0.
hbase-client-1.0.0.jar, Result.class package org.apache.hadoop.hbase.client
===== [] Result.getStats ( ) : ClientProtos.RegionLoadStats 1
org/apache/hadoop/hbase/client/Result.getStats:()Lorg/apache/hadoop/hbase/protobuf/generated/ClientProtos$RegionLoadStats;
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats to RegionLoadStats.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
==== REST changes:
hbase-rest-1.0.0.jar, Client.class package org.apache.hadoop.hbase.rest.client
===== [] Client.getHttpClient ( ) : HttpClient 1
org/apache/hadoop/hbase/rest/client/Client.getHttpClient:()Lorg/apache/commons/httpclient/HttpClient
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from org.apache.commons.httpclient.HttpClient to org.apache.http.client.HttpClient.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
hbase-rest-1.0.0.jar, Response.class package org.apache.hadoop.hbase.rest.client
===== [] Response.getHeaders ( ) : Header[ ] 1
org/apache/hadoop/hbase/rest/client/Response.getHeaders:()[Lorg/apache/commons/httpclient/Header;
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from org.apache.commons.httpclient.Header[] to org.apache.http.Header[].| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
==== PrettyPrinter changes:
hbase-server-1.0.0.jar, HFilePrettyPrinter.class package org.apache.hadoop.hbase.io.hfile
===== []HFilePrettyPrinter.processFile ( Path file ) : void 1
org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.processFile:(Lorg/apache/hadoop/fs/Path;)V
[cols="1,1", frame="all"]
|===
| Change | Result
| Return value type has been changed from void to int.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
|===
==== AccessControlClient changes:
HBASE-13171 Change AccessControlClient methods to accept connection object to reduce setup time. Parameters have been changed in the following methods:
* hbase-client-1.2.7-SNAPSHOT.jar, AccessControlClient.class
package org.apache.hadoop.hbase.security.access
* AccessControlClient.getUserPermissions ( Configuration conf, String tableRegex ) [static] : List<UserPermission> *DEPRECATED*
org/apache/hadoop/hbase/security/access/AccessControlClient.getUserPermissions:(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;)Ljava/util/List;
* AccessControlClient.grant ( Configuration conf, String namespace, String userName, Permission.Action... actions )[static] : void *DEPRECATED*
org/apache/hadoop/hbase/security/access/AccessControlClient.grant:(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;Ljava/lang/String;[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
* AccessControlClient.grant ( Configuration conf, String userName, Permission.Action... actions ) [static] : void *DEPRECATED*
org/apache/hadoop/hbase/security/access/AccessControlClient.grant:(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
* AccessControlClient.grant ( Configuration conf, TableName tableName, String userName, byte[ ] family, byte[ ] qual,Permission.Action... actions ) [static] : void *DEPRECATED*
org/apache/hadoop/hbase/security/access/AccessControlClient.grant:(Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/hbase/TableName;Ljava/lang/String;[B[B[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
* AccessControlClient.isAccessControllerRunning ( Configuration conf ) [static] : boolean *DEPRECATED*
org/apache/hadoop/hbase/security/access/AccessControlClient.isAccessControllerRunning:(Lorg/apache/hadoop/conf/Configuration;)Z
* AccessControlClient.revoke ( Configuration conf, String namespace, String userName, Permission.Action... actions )[static] : void *DEPRECATED*
org/apache/hadoop/hbase/security/access/AccessControlClient.revoke:(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;Ljava/lang/String;[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
* AccessControlClient.revoke ( Configuration conf, String userName, Permission.Action... actions ) [static] : void *DEPRECATED*
org/apache/hadoop/hbase/security/access/AccessControlClient.revoke:(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
* AccessControlClient.revoke ( Configuration conf, TableName tableName, String username, byte[ ] family, byte[ ] qualifier,Permission.Action... actions ) [static] : void *DEPRECATED*
org/apache/hadoop/hbase/security/access/AccessControlClient.revoke:(Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/hbase/TableName;Ljava/lang/String;[B[B[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
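A minimal sketch of the connection-based style that replaces the deprecated Configuration-based calls (the table, user, and family names are illustrative):
[source,java]
----
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.security.access.AccessControlClient;
import org.apache.hadoop.hbase.security.access.Permission;
import org.apache.hadoop.hbase.util.Bytes;

public class AclExample {
  // Pass the already-established Connection instead of a Configuration,
  // so each call does not have to set up its own connection.
  static void grantRead(Connection conn) throws Throwable {
    AccessControlClient.grant(conn, TableName.valueOf("testtable"), "bob",
        Bytes.toBytes("cf"), null, Permission.Action.READ);
  }
}
----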
* HBASE-18731: [compat 1-2] Mark protected methods of QuotaSettings that touch Protobuf internals as IA.Private

View File

@ -1,361 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[appendix]
== HFile format
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
:source-language: java
This appendix describes the evolution of the HFile format.
[[hfilev1]]
=== HBase File Format (version 1)
As we will be discussing changes to the HFile format, it is useful to give a short overview of the original (HFile version 1) format.
[[hfilev1.overview]]
==== Overview of Version 1
An HFile in version 1 format is structured as follows:
.HFile V1 Format
image::hfile.png[HFile Version 1]
==== Block index format in version 1
The block index in version 1 is very straightforward.
For each entry, it contains:
. Offset (long)
. Uncompressed size (int)
. Key (a serialized byte array written using Bytes.writeByteArray)
.. Key length as a variable-length integer (VInt)
.. Key bytes
The number of entries in the block index is stored in the fixed file trailer, and has to be passed in to the method that reads the block index.
One of the limitations of the block index in version 1 is that it does not provide the compressed size of a block, which turns out to be necessary for decompression.
Therefore, the HFile reader has to infer this compressed size from the offset difference between blocks.
We fix this limitation in version 2, where we store on-disk block size instead of uncompressed size, and get uncompressed size from the block header.
[[hfilev2]]
=== HBase file format with inline blocks (version 2)
Note: this feature was introduced in HBase 0.92
==== Motivation
We found it necessary to revise the HFile format after encountering high memory usage and slow startup times caused by large Bloom filters and block indexes in the region server.
Bloom filters can get as large as 100 MB per HFile, which adds up to 2 GB when aggregated over 20 regions.
Block indexes can grow as large as 6 GB in aggregate size over the same set of regions.
A region is not considered opened until all of its block index data is loaded.
Large Bloom filters produce a different performance problem: the first get request that requires a Bloom filter lookup will incur the latency of loading the entire Bloom filter bit array.
To speed up region server startup we break Bloom filters and block indexes into multiple blocks and write those blocks out as they fill up, which also reduces the HFile writer's memory footprint.
In the Bloom filter case, "filling up a block" means accumulating enough keys to efficiently utilize a fixed-size bit array, and in the block index case we accumulate an "index block" of the desired size.
Bloom filter blocks and index blocks (we call these "inline blocks") become interspersed with data blocks, and as a side effect we can no longer rely on the difference between block offsets to determine data block length, as it was done in version 1.
HFile is a low-level file format by design, and it should not deal with application-specific details such as Bloom filters, which are handled at StoreFile level.
Therefore, we call Bloom filter blocks in an HFile "inline" blocks.
We also supply HFile with an interface to write those inline blocks.
Another format modification aimed at reducing the region server startup time is to use a contiguous "load-on-open" section that has to be loaded in memory at the time an HFile is being opened.
Currently, as an HFile opens, there are separate seek operations to read the trailer, data/meta indexes, and file info.
To read the Bloom filter, there are two more seek operations for its "data" and "meta" portions.
In version 2, we seek once to read the trailer and seek again to read everything else we need to open the file from a contiguous block.
[[hfilev2.overview]]
==== Overview of Version 2
The version of HBase introducing the above features reads both version 1 and 2 HFiles, but only writes version 2 HFiles.
A version 2 HFile is structured as follows:
.HFile Version 2 Structure
image::hfilev2.png[HFile Version 2]
==== Unified version 2 block format
In version 2, every block in the data section contains the following fields:
. 8 bytes: Block type, a sequence of bytes equivalent to version 1's "magic records". Supported block types are:
.. DATA data blocks
.. LEAF_INDEX leaf-level index blocks in a multi-level-block-index
.. BLOOM_CHUNK Bloom filter chunks
.. META meta blocks (not used for Bloom filters in version 2 anymore)
.. INTERMEDIATE_INDEX intermediate-level index blocks in a multi-level blockindex
.. ROOT_INDEX root-level index blocks in a multi-level block index
.. FILE_INFO the ''file info'' block, a small key-value map of metadata
.. BLOOM_META a Bloom filter metadata block in the load-on-open section
.. TRAILER a fixed-size file trailer.
As opposed to the above, this is not an HFile v2 block but a fixed-size (for each HFile version) data structure
.. INDEX_V1 this block type is only used for legacy HFile v1 blocks
. Compressed size of the block's data, not including the header (int).
+
Can be used for skipping the current data block when scanning HFile data.
. Uncompressed size of the block's data, not including the header (int)
+
This is equal to the compressed size if the compression algorithm is NONE.
. File offset of the previous block of the same type (long)
+
Can be used for seeking to the previous data/index block
. Compressed data (or uncompressed data if the compression algorithm is NONE).
The above format of blocks is used in the following HFile sections:
Scanned block section::
The section is named so because it contains all data blocks that need to be read when an HFile is scanned sequentially.
Also contains Leaf index blocks and Bloom chunk blocks.
Non-scanned block section::
This section still contains unified-format v2 blocks but it does not have to be read when doing a sequential scan.
This section contains "meta" blocks and intermediate-level index blocks.
We are supporting "meta" blocks in version 2 the same way they were supported in version 1, even though we do not store Bloom filter data in these blocks anymore.
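As a concrete illustration of the unified block format, the following toy reader (a sketch, not HBase's own block-reading code) walks the header fields in the order listed above:
[source,java]
----
import java.io.DataInputStream;
import java.io.IOException;

public class BlockHeaderExample {
  // Reads the version 2 per-block header fields described above.
  static void readHeader(DataInputStream in) throws IOException {
    byte[] blockTypeMagic = new byte[8];              // block type "magic record"
    in.readFully(blockTypeMagic);
    int onDiskSizeWithoutHeader = in.readInt();       // compressed size, header excluded
    int uncompressedSizeWithoutHeader = in.readInt(); // equals the former when compression is NONE
    long prevBlockOffset = in.readLong();             // previous block of the same type
    // The (possibly compressed) block data follows; skipping
    // onDiskSizeWithoutHeader bytes jumps to the next block when scanning.
    in.skipBytes(onDiskSizeWithoutHeader);
  }
}
----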
==== Block index in version 2
There are three types of block indexes in HFile version 2, stored in two different formats (root and non-root):
. Data index -- version 2 multi-level block index, consisting of:
.. Version 2 root index, stored in the data block index section of the file
.. Optionally, version 2 intermediate levels, stored in the non-root format in the data index section of the file. Intermediate levels can only be present if leaf level blocks are present
.. Optionally, version 2 leaf levels, stored in the non-root format inline with data blocks
. Meta index -- version 2 root index format only, stored in the meta index section of the file
. Bloom index -- version 2 root index format only, stored in the ''load-on-open'' section as part of Bloom filter metadata.
==== Root block index format in version 2
This format applies to:
. Root level of the version 2 data index
. Entire meta and Bloom indexes in version 2, which are always single-level.
A version 2 root index block is a sequence of entries of the following format, similar to entries of a version 1 block index, but storing on-disk size instead of uncompressed size.
. Offset (long)
+
This offset may point to a data block or to a deeper-level index block.
. On-disk size (int)
. Key (a serialized byte array stored using Bytes.writeByteArray)
.. Key length as a variable-length integer (VInt)
.. Key bytes
A single-level version 2 block index consists of just a single root index block.
To read a root index block of version 2, one needs to know the number of entries.
For the data index and the meta index the number of entries is stored in the trailer, and for the Bloom index it is stored in the compound Bloom filter metadata.
For a multi-level block index we also store the following fields in the root index block in the load-on-open section of the HFile, in addition to the data structure described above:
. Middle leaf index block offset
. Middle leaf block on-disk size (meaning the leaf index block containing the reference to the ''middle'' data block of the file)
. The index of the mid-key (defined below) in the middle leaf-level block.
These additional fields are used to efficiently retrieve the mid-key of the HFile used in HFile splits, which we define as the first key of the block with a zero-based index of (n - 1) / 2, if the total number of blocks in the HFile is n.
This definition is consistent with how the mid-key was determined in HFile version 1, and is reasonable in general, because blocks are likely to be the same size on average, but we don't have any estimates on individual key/value pair sizes.
When writing a version 2 HFile, the total number of data blocks pointed to by every leaf-level index block is kept track of.
When we finish writing and the total number of leaf-level blocks is determined, it is clear which leaf-level block contains the mid-key, and the fields listed above are computed.
When reading the HFile and the mid-key is requested, we retrieve the middle leaf index block (potentially from the block cache) and get the mid-key value from the appropriate position inside that leaf block.
==== Non-root block index format in version 2
This format applies to intermediate-level and leaf index blocks of a version 2 multi-level data block index.
Every non-root index block is structured as follows.
. numEntries: the number of entries (int).
. entryOffsets: the "secondary index" of offsets of entries in the block, to facilitate
a quick binary search on the key (`numEntries + 1` int values). The last value
is the total length of all entries in this index block. For example, in a non-root
index block with entry sizes 60, 80, 50 the "secondary index" will contain the
following int array: `{0, 60, 140, 190}`.
. Entries.
Each entry contains:
+
.. Offset of the block referenced by this entry in the file (long)
.. On-disk size of the referenced block (int)
.. Key.
The length can be calculated from entryOffsets.
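To make the secondary index concrete, here is a toy calculation (not HBase code) using the example offsets from the text:
[source,java]
----
public class SecondaryIndexExample {
  public static void main(String[] args) {
    int[] entryOffsets = {0, 60, 140, 190}; // numEntries + 1 values, from the example above
    int i = 1;                              // second entry, zero-based
    int start = entryOffsets[i];
    int length = entryOffsets[i + 1] - entryOffsets[i];
    // prints: entry 1 starts at 60, length 80
    System.out.println("entry " + i + " starts at " + start + ", length " + length);
  }
}
----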
==== Bloom filters in version 2
In contrast with version 1, in a version 2 HFile Bloom filter metadata is stored in the load-on-open section of the HFile for quick startup.
. A compound Bloom filter, whose metadata consists of:
.. Bloom filter version = 3 (int). There used to be a DynamicByteBloomFilter class that had the Bloom filter version number 2
.. The total byte size of all compound Bloom filter chunks (long)
.. Number of hash functions (int)
.. Type of hash functions (int)
.. The total key count inserted into the Bloom filter (long)
.. The maximum total number of keys in the Bloom filter (long)
.. The number of chunks (int)
.. Comparator class used for Bloom filter keys, a UTF-8 encoded string stored using Bytes.writeByteArray
.. Bloom block index in the version 2 root block index format
==== File Info format in versions 1 and 2
The file info block is a serialized map from byte arrays to byte arrays, with the following keys, among others.
StoreFile-level logic adds more keys to this.
[cols="1,1", frame="all"]
|===
|hfile.LASTKEY| The last key of the file (byte array)
|hfile.AVG_KEY_LEN| The average key length in the file (int)
|hfile.AVG_VALUE_LEN| The average value length in the file (int)
|===
In version 2, we did not change the file format, but we moved the file info to
the final section of the file, which can be loaded as one block when the HFile
is being opened.
Also, we do not store the comparator in the version 2 file info anymore.
Instead, we store it in the fixed file trailer.
This is because we need to know the comparator at the time of parsing the load-on-open section of the HFile.
==== Fixed file trailer format differences between versions 1 and 2
The following table shows common and different fields between fixed file trailers in versions 1 and 2.
Note that the size of the trailer is different depending on the version, so it is ''fixed'' only within one version.
However, the version is always stored as the last four-byte integer in the file.
.Differences between HFile Versions 1 and 2
[cols="1,1", frame="all"]
|===
| Version 1 | Version 2
| |File info offset (long)
| Data index offset (long)
| loadOnOpenOffset (long) _The offset of the section that we need to load when opening the file._
| | Number of data index entries (int)
| metaIndexOffset (long) _This field is not being used by the version 1 reader, so we removed it from version 2._ | uncompressedDataIndexSize (long) _The total uncompressed size of the whole data block index, including root-level, intermediate-level, and leaf-level blocks._
| | Number of meta index entries (int)
| | Total uncompressed bytes (long)
| numEntries (int) | numEntries (long)
| Compression codec: 0 = LZO, 1 = GZ, 2 = NONE (int) | Compression codec: 0 = LZO, 1 = GZ, 2 = NONE (int)
| | The number of levels in the data block index (int)
| | firstDataBlockOffset (long) _The offset of the first data block. Used when scanning._
| | lastDataBlockEnd (long) _The offset of the first byte after the last key/value data block. We don't need to go beyond this offset when scanning._
| Version: 1 (int) | Version: 2 (int)
|===
==== getShortMidpointKey (an optimization for data index block)
Note: this optimization was introduced in HBase 0.95+
HFiles contain many blocks that contain a range of sorted Cells.
Each cell has a key.
To save IO when reading Cells, the HFile also has an index that maps a Cell's start key to the offset of the beginning of a particular block.
Prior to this optimization, HBase would use the key of the first cell in each data block as the index key.
In HBASE-7845, we generate a new key that is lexicographically larger than the last key of the previous block and lexicographically equal or smaller than the start key of the current block.
While actual keys can potentially be very long, this "fake key" or "virtual key" can be much shorter.
For example, if the stop key of the previous block is "the quick brown fox" and the start key of the current block is "the who", we could use "the r" as our virtual key in our hfile index.
There are two benefits to this:
* having shorter keys reduces the hfile index size, (allowing us to keep more indexes in memory), and
* using something closer to the end key of the previous block allows us to avoid a potential extra IO when the target key lives in between the "virtual key" and the key of the first element in the target block.
This optimization (implemented by the getShortMidpointKey method) is inspired by LevelDB's ByteWiseComparatorImpl::FindShortestSeparator() and FindShortSuccessor().
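The following toy sketch (not the HBase implementation, which operates on byte arrays via the comparator) captures the idea on Strings, using the example above:
[source,java]
----
public class ShortSeparatorExample {
  // Find a key greater than prevLast and no greater than currFirst
  // that is as short as possible, e.g. "the quick brown fox" vs "the who" -> "the r".
  static String shortSeparator(String prevLast, String currFirst) {
    int i = 0;
    int min = Math.min(prevLast.length(), currFirst.length());
    while (i < min && prevLast.charAt(i) == currFirst.charAt(i)) {
      i++; // skip the common prefix
    }
    if (i < min && prevLast.charAt(i) + 1 < currFirst.charAt(i)) {
      // bump the first differing character of prevLast and truncate
      return prevLast.substring(0, i) + (char) (prevLast.charAt(i) + 1);
    }
    return currFirst; // fall back to the real start key
  }

  public static void main(String[] args) {
    System.out.println(shortSeparator("the quick brown fox", "the who")); // "the r"
  }
}
----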
[[hfilev3]]
=== HBase File Format with Security Enhancements (version 3)
Note: this feature was introduced in HBase 0.98
[[hfilev3.motivation]]
==== Motivation
Version 3 of HFile makes changes needed to ease management of encryption at rest and cell-level metadata (which in turn is needed for cell-level ACLs and cell-level visibility labels). For more information see <<hbase.encryption.server,hbase.encryption.server>>, <<hbase.tags,hbase.tags>>, <<hbase.accesscontrol.configuration,hbase.accesscontrol.configuration>>, and <<hbase.visibility.labels,hbase.visibility.labels>>.
[[hfilev3.overview]]
==== Overview
The version of HBase introducing the above features reads HFiles in versions 1, 2, and 3 but only writes version 3 HFiles.
Version 3 HFiles are structured the same as version 2 HFiles.
For more information see <<hfilev2.overview,hfilev2.overview>>.
[[hvilev3.infoblock]]
==== File Info Block in Version 3
Version 3 added two additional pieces of information to the reserved keys in the file info block.
[cols="1,1", frame="all"]
|===
| hfile.MAX_TAGS_LEN | The maximum number of bytes needed to store the serialized tags for any single cell in this hfile (int)
| hfile.TAGS_COMPRESSED | Does the block encoder for this hfile compress tags? (boolean). Should only be present if hfile.MAX_TAGS_LEN is also present.
|===
When reading a Version 3 HFile the presence of `MAX_TAGS_LEN` is used to determine how to deserialize the cells within a data block.
Therefore, consumers must read the file's info block prior to reading any data blocks.
When writing a Version 3 HFile, HBase will always include `MAX_TAGS_LEN` when flushing the memstore to underlying filesystem.
When compacting extant files, the default writer will omit `MAX_TAGS_LEN` if all of the files selected do not themselves contain any cells with tags.
See <<compaction,compaction>> for details on the compaction file selection algorithm.
[[hfilev3.datablock]]
==== Data Blocks in Version 3
Within an HFile, HBase cells are stored in data blocks as a sequence of KeyValues (see <<hfilev1.overview,hfilev1.overview>>, or link:http://www.larsgeorge.com/2009/10/hbase-architecture-101-storage.html[Lars George's
excellent introduction to HBase Storage]). In version 3, these KeyValues can optionally include a set of 0 or more tags:
[cols="1,1", frame="all"]
|===
| Version 1 & 2, Version 3 without MAX_TAGS_LEN | Version 3 with MAX_TAGS_LEN
2+| Key Length (4 bytes)
2+| Value Length (4 bytes)
2+| Key bytes (variable)
2+| Value bytes (variable)
| | Tags Length (2 bytes)
| | Tags bytes (variable)
|===
If the info block for a given HFile contains an entry for `MAX_TAGS_LEN` each cell will have the length of that cell's tags included, even if that length is zero.
The actual tags are stored as a sequence of tag length (2 bytes), tag type (1 byte), and tag bytes (variable). The format of an individual tag's bytes depends on the tag type.
Note that the dependence on the contents of the info block implies that prior to reading any data blocks you must first process a file's info block.
It also implies that prior to writing a data block you must know if the file's info block will include `MAX_TAGS_LEN`.
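A toy encoder for this wire format (a sketch; in particular, whether the 2-byte length covers the type byte is an assumption here, not stated in the text):
[source,java]
----
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class TagFormatExample {
  // Encodes one tag as: tag length (2 bytes), tag type (1 byte), tag bytes.
  // Assumption: the 2-byte length covers the type byte plus the tag bytes.
  static byte[] encodeTag(byte type, byte[] tagBytes) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bos);
    out.writeShort(tagBytes.length + 1);
    out.writeByte(type);
    out.write(tagBytes);
    return bos.toByteArray();
  }
}
----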
[[hfilev3.fixedtrailer]]
==== Fixed File Trailer in Version 3
The fixed file trailers written with HFile version 3 are always serialized with protocol buffers.
Additionally, it adds an optional field to the version 2 protocol buffer named encryption_key.
If HBase is configured to encrypt HFiles this field will store a data encryption key for this particular HFile, encrypted with the current cluster master key using AES.
For more information see <<hbase.encryption.server,hbase.encryption.server>>.
:numbered:

File diff suppressed because it is too large Load Diff

View File

@ -1,47 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[appendix]
[[asf]]
== HBase and the Apache Software Foundation
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
:source-language: java
HBase is a project in the Apache Software Foundation and as such there are responsibilities to the ASF to ensure a healthy project.
[[asf.devprocess]]
=== ASF Development Process
See the link:https://www.apache.org/dev/#committers[Apache Development Process page] for all sorts of information on how the ASF is structured (e.g., PMC, committers, contributors), to tips on contributing and getting involved, and how open-source works at ASF.
[[asf.reporting]]
=== ASF Board Reporting
Once a quarter, each project in the ASF portfolio submits a report to the ASF board.
This is done by the HBase project lead and the committers.
See link:https://www.apache.org/foundation/board/reporting[ASF board reporting] for more information.
:numbered:

View File

@ -1,170 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[casestudies]]
= Apache HBase Case Studies
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
[[casestudies.overview]]
== Overview
This chapter will describe a variety of performance and troubleshooting case studies that can provide a useful blueprint on diagnosing Apache HBase cluster issues.
For more information on Performance and Troubleshooting, see <<performance>> and <<trouble>>.
[[casestudies.schema]]
== Schema Design
See the schema design case studies here: <<schema.casestudies>>
[[casestudies.perftroub]]
== Performance/Troubleshooting
[[casestudies.slownode]]
=== Case Study #1 (Performance Issue On A Single Node)
==== Scenario
Following a scheduled reboot, one data node began exhibiting unusual behavior.
Routine MapReduce jobs run against HBase tables, which regularly completed in five or six minutes, began taking 30 or 40 minutes to finish.
These jobs were consistently found to be waiting on map and reduce tasks assigned to the troubled data node (e.g., the slow map tasks all had the same Input Split). The situation came to a head during a distributed copy, when the copy was severely prolonged by the lagging node.
==== Hardware
.Datanodes:
* Two 12-core processors
* Six Enterprise SATA disks
* 24GB of RAM
* Two bonded gigabit NICs
.Network:
* 10 Gigabit top-of-rack switches
* 20 Gigabit bonded interconnects between racks.
==== Hypotheses
===== HBase "Hot Spot" Region
We hypothesized that we were experiencing a familiar point of pain: a "hot spot" region in an HBase table, where uneven key-space distribution can funnel a huge number of requests to a single HBase region, bombarding the RegionServer process and causing slow response times.
Examination of the HBase Master status page showed that the number of HBase requests to the troubled node was almost zero.
Further, examination of the HBase logs showed that there were no region splits, compactions, or other region transitions in progress.
This effectively ruled out a "hot spot" as the root cause of the observed slowness.
===== HBase Region With Non-Local Data
Our next hypothesis was that one of the MapReduce tasks was requesting data from HBase that was not local to the DataNode, thus forcing HDFS to request data blocks from other servers over the network.
Examination of the DataNode logs showed that there were very few blocks being requested over the network, indicating that the HBase region was correctly assigned, and that the majority of the necessary data was located on the node.
This ruled out the possibility of non-local data causing a slowdown.
===== Excessive I/O Wait Due To Swapping Or An Over-Worked Or Failing Hard Disk
After concluding that Hadoop and HBase were not likely to be the culprits, we moved on to troubleshooting the DataNode's hardware.
Java, by design, will periodically scan its entire memory space to do garbage collection.
If system memory is heavily overcommitted, the Linux kernel may enter a vicious cycle, using up all of its resources swapping Java heap back and forth from disk to RAM as Java tries to run garbage collection.
Further, a failing hard disk will often retry reads and/or writes many times before giving up and returning an error.
This can manifest as high iowait, as running processes wait for reads and writes to complete.
Finally, a disk nearing the upper edge of its performance envelope will begin to cause iowait as it informs the kernel that it cannot accept any more data, and the kernel queues incoming data into the dirty write pool in memory.
However, using `vmstat(1)` and `free(1)`, we could see that no swap was being used, and the amount of disk IO was only a few kilobytes per second.
===== Slowness Due To High Processor Usage
Next, we checked to see whether the system was performing slowly simply due to very high computational load. `top(1)` showed that the system load was higher than normal, but `vmstat(1)` and `mpstat(1)` showed that the amount of processor being used for actual computation was low.
===== Network Saturation (The Winner)
Since neither the disks nor the processors were being utilized heavily, we moved on to the performance of the network interfaces.
The DataNode had two gigabit ethernet adapters, bonded to form an active-standby interface. `ifconfig(8)` showed some unusual anomalies, namely interface errors, overruns, and framing errors.
While not unheard of, these kinds of errors are exceedingly rare on modern hardware which is operating as it should:
----
$ /sbin/ifconfig bond0
bond0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:10.x.x.x Bcast:10.x.x.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:2990700159 errors:12 dropped:0 overruns:1 frame:6 <--- Look Here! Errors!
TX packets:3443518196 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2416328868676 (2.4 TB) TX bytes:3464991094001 (3.4 TB)
----
These errors immediately led us to suspect that one or more of the ethernet interfaces might have negotiated the wrong line speed.
This was confirmed both by running an ICMP ping from an external host and observing round-trip times in excess of 700ms, and by running `ethtool(8)` on the members of the bond interface and discovering that the active interface was operating at 100Mb/s, full duplex.
----
$ sudo ethtool eth0
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Link partner advertised link modes: Not reported
Link partner advertised pause frame use: No
Link partner advertised auto-negotiation: No
Speed: 100Mb/s <--- Look Here! Should say 1000Mb/s!
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
MDI-X: Unknown
Supports Wake-on: umbg
Wake-on: g
Current message level: 0x00000003 (3)
Link detected: yes
----
In normal operation, the ICMP ping round trip time should be around 20ms, and the interface speed and duplex should read "1000Mb/s" and "Full", respectively.
==== Resolution
After determining that the active ethernet adapter was at the incorrect speed, we used the `ifenslave(8)` command to make the standby interface the active interface, which yielded an immediate improvement in MapReduce performance, and a 10 times improvement in network throughput.
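For illustration only, the failover command was of the following form; treat `eth1` as a stand-in for the actual standby member of the bond:
----
$ sudo ifenslave -c bond0 eth1   # change the bond's active slave to eth1
----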
On the next trip to the datacenter, we determined that the line speed issue was ultimately caused by a bad network cable, which was replaced.
[[casestudies.perf.1]]
=== Case Study #2 (Performance Research 2012)
Investigation results of a self-described "we're not sure what's wrong, but it seems slow" problem. http://gbif.blogspot.com/2012/03/hbase-performance-evaluation-continued.html
[[casestudies.perf.2]]
=== Case Study #3 (Performance Research 2010)
Investigation results of general cluster performance from 2010.
Although this research is on an older version of the codebase, this writeup is still very useful in terms of approach. http://hstack.org/hbase-performance-testing/
[[casestudies.max.transfer.threads]]
=== Case Study #4 (max.transfer.threads Config)
Case study of configuring `max.transfer.threads` (previously known as `xcievers`) and diagnosing errors from misconfigurations. http://www.larsgeorge.com/2012/03/hadoop-hbase-and-xceivers.html
See also <<dfs.datanode.max.transfer.threads>>.
@ -1,107 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[community]]
= Community
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
== Decisions
.Feature Branches
Feature Branches are easy to make.
You do not have to be a committer to make one.
Just request the name of your branch be added to JIRA up on the developer's mailing list and a committer will add it for you.
Thereafter you can file issues against your feature branch in Apache HBase JIRA.
You keep your code elsewhere -- it should be public so it can be observed -- and you can update the dev mailing list on progress.
When the feature is ready for commit, 3 +1s from committers will get your feature merged.
See link:https://lists.apache.org/thread.html/200513c7e7e4df23c8b9134eeee009d61205c79314e77f222d396006%401346870308%40%3Cdev.hbase.apache.org%3E[HBase, mail # dev - Thoughts
about large feature dev branches]
[[hbase.fix.version.in.jira]]
.How to set fix version in JIRA on issue resolve
Here is how we agreed to set versions in JIRA when we resolve an issue.
If master is going to be 2.0.0, and branch-1 1.4.0 then:
* Commit only to master: Mark with 2.0.0
* Commit to branch-1 and master: Mark with 2.0.0, and 1.4.0
* Commit to branch-1.3, branch-1, and master: Mark with 2.0.0, 1.4.0, and 1.3.x
* Commit site fixes: no version
[[hbase.when.to.close.jira]]
.Policy on when to set a RESOLVED JIRA as CLOSED
We agreed that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
[[no.permanent.state.in.zk]]
.Only transient state in ZooKeeper!
You should be able to kill the data in zookeeper and hbase should ride over it recreating the zk content as it goes.
This is an old adage around these parts.
We just made note of it now.
We also are currently in violation of this basic tenet -- replication at least keeps permanent state in zk -- but we are working to undo this breaking of a golden rule.
[[community.roles]]
== Community Roles
=== Release Managers
Each maintained release branch has a release manager, who volunteers to coordinate the backporting of new features and bug fixes to that release.
The release managers are link:https://hbase.apache.org/team-list.html[committers].
If you would like your feature or bug fix to be included in a given release, communicate with that release manager.
If this list goes out of date or you can't reach the listed person, reach out to someone else on the list.
NOTE: End-of-life releases are not included in this list.
.Release Managers
[cols="1,1", options="header"]
|===
| Release
| Release Manager
| 1.3
| Mikhail Antonov
| 1.4
| Andrew Purtell
| 2.2
| Guanghao Zhang
| 2.3
| Nick Dimiduk
|===
[[hbase.commit.msg.format]]
== Commit Message format
We agreed to the following Git commit message format:
[source]
----
HBASE-xxxxx <title>. (<contributor>)
----
If the person making the commit is the contributor, leave off the '(<contributor>)' element.
@ -1,650 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[appendix]
[[compression]]
== Compression and Data Block Encoding In HBase(((Compression,Data BlockEncoding)))
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
NOTE: Codecs mentioned in this section are for encoding and decoding data blocks or row keys.
For information about replication codecs, see <<cluster.replication.preserving.tags,cluster.replication.preserving.tags>>.
HBase supports several different compression algorithms which can be enabled on a ColumnFamily.
Data block encoding attempts to limit duplication of information in keys, taking advantage of some of the fundamental designs and patterns of HBase, such as sorted row keys and the schema of a given table.
Compressors reduce the size of large, opaque byte arrays in cells, and can significantly reduce the storage space needed to store uncompressed data.
Compressors and data block encoding can be used together on the same ColumnFamily.
.Changes Take Effect Upon Compaction
If you change compression or encoding for a ColumnFamily, the changes take effect during compaction.
Some codecs take advantage of capabilities built into Java, such as GZip compression.
Others rely on native libraries. Native libraries may be available via codec dependencies installed into
HBase's library directory, or, if you are utilizing Hadoop codecs, as part of Hadoop. Hadoop codecs
typically have a native code component so follow instructions for installing Hadoop native binary
support at <<hadoop.native.lib>>.
This section discusses common codecs that are used and tested with HBase.
No matter what codec you use, be sure to test that it is installed correctly and is available on all nodes in your cluster.
Extra operational steps may be necessary to be sure that codecs are available on newly-deployed nodes.
You can use the <<compression.test,compression.test>> utility to check that a given codec is correctly installed.
To configure HBase to use a compressor, see <<compressor.install,compressor.install>>.
To enable a compressor for a ColumnFamily, see <<changing.compression,changing.compression>>.
To enable data block encoding for a ColumnFamily, see <<data.block.encoding.enable,data.block.encoding.enable>>.
.Block Compressors
* NONE
+
This compression type constant selects no compression, and is the default.
* BROTLI
+
https://en.wikipedia.org/wiki/Brotli[Brotli] is a generic-purpose lossless compression algorithm
that compresses data using a combination of a modern variant of the LZ77 algorithm, Huffman
coding, and 2nd order context modeling, with a compression ratio comparable to the best currently
available general-purpose compression methods. It is similar in speed to GZ but offers denser
compression.
* BZIP2
+
https://en.wikipedia.org/wiki/Bzip2[Bzip2] compresses files using the Burrows-Wheeler block
sorting text compression algorithm and Huffman coding. Compression is generally considerably
better than that achieved by the dictionary- (LZ-) based compressors, but both compression and
decompression can be slow in comparison to other options.
* GZ
+
gzip is based on the https://en.wikipedia.org/wiki/Deflate[DEFLATE] algorithm, which is a
combination of LZ77 and Huffman coding. It is universally available in the Java Runtime
Environment so is a good lowest common denominator option. However in comparison to more modern
algorithms like Zstandard it is quite slow.
* LZ4
+
https://en.wikipedia.org/wiki/LZ4_(compression_algorithm)[LZ4] is a lossless data compression
algorithm that is focused on compression and decompression speed. It belongs to the LZ77 family
of compression algorithms, like Brotli, DEFLATE, Zstandard, and others. In our microbenchmarks
LZ4 is the fastest option for both compression and decompression in that family, and is our
universally recommended option.
* LZMA
+
https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Markov_chain_algorithm[LZMA] is a
dictionary compression scheme somewhat similar to the LZ77 algorithm that achieves very high
compression ratios with a computationally expensive predictive model and variable size
compression dictionary, while still maintaining decompression speed similar to other commonly used
compression algorithms. LZMA is superior to all other options in general compression ratio but as
a compressor it can be extremely slow, especially when configured to operate at higher levels of
compression.
* LZO
+
https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Oberhumer[LZO] is another LZ-variant
data compression algorithm, with an implementation focused on decompression speed. It is almost
but not quite as fast as LZ4.
* SNAPPY
+
https://en.wikipedia.org/wiki/Snappy_(compression)[Snappy] is based on ideas from LZ77 but is
optimized for very high compression speed, achieving only a "reasonable" compression in trade.
It is as fast as LZ4 but does not compress quite as well. We offer a pure Java Snappy codec
that can be used instead of GZ as the universally available option for any Java runtime on any
hardware architecture.
* ZSTD
+
https://en.wikipedia.org/wiki/Zstd[Zstandard] combines a dictionary-matching stage (LZ77) with
a large search window and a fast entropy coding stage, using both Finite State Entropy and
Huffman coding. Compression speed can vary by a factor of 20 or more between the fastest and
slowest levels, while decompression is uniformly fast, varying by less than 20% between the
fastest and slowest levels.
+
ZStandard is the most flexible of the available compression codec options, offering a compression
ratio similar to LZ4 at level 1 (but with slightly less performance), compression ratios
comparable to DEFLATE at mid levels (but with better performance), and LZMA-alike dense
compression (and LZMA-alike compression speeds) at high levels; while providing universally fast
decompression.
.Data Block Encoding Types
Prefix::
Often, keys are very similar. Specifically, keys often share a common prefix and only differ near the end. For instance, one key might be `RowKey:Family:Qualifier0` and the next key might be `RowKey:Family:Qualifier1`.
+
In Prefix encoding, an extra column is added which holds the length of the prefix shared between the current key and the previous key.
Assuming the first key here is totally different from the key before, its prefix length is 0.
+
The second key's prefix length is `23`, since they have the first 23 characters in common.
+
Obviously if the keys tend to have nothing in common, Prefix will not provide much benefit.
+
The following image shows a hypothetical ColumnFamily with no data block encoding.
+
.ColumnFamily with No Encoding
image::data_block_no_encoding.png[]
+
Here is the same data with prefix data encoding.
+
.ColumnFamily with Prefix Encoding
image::data_block_prefix_encoding.png[]
Diff::
Diff encoding expands upon Prefix encoding.
Instead of considering the key sequentially as a monolithic series of bytes, each key field is split so that each part of the key can be compressed more efficiently.
+
Two new fields are added: timestamp and type.
+
If the ColumnFamily is the same as the previous row, it is omitted from the current row.
+
If the key length, value length or type are the same as the previous row, the field is omitted.
+
In addition, for increased compression, the timestamp is stored as a Diff from the previous row's timestamp, rather than being stored in full.
Given the two row keys in the Prefix example, and given an exact match on timestamp and the same type, neither the value length, or type needs to be stored for the second row, and the timestamp value for the second row is just 0, rather than a full timestamp.
+
Diff encoding is disabled by default because writing and scanning are slower but more data is cached.
+
This image shows the same ColumnFamily from the previous images, with Diff encoding.
+
.ColumnFamily with Diff Encoding
image::data_block_diff_encoding.png[]
Fast Diff::
Fast Diff works similar to Diff, but uses a faster implementation. It also adds another field which stores a single bit to track whether the data itself is the same as the previous row. If it is, the data is not stored again.
+
Fast Diff is the recommended codec to use if you have long keys or many columns.
+
The data format is nearly identical to Diff encoding, so there is not an image to illustrate it.
Prefix Tree::
Prefix tree encoding was introduced as an experimental feature in HBase 0.96.
It provides similar memory savings to the Prefix, Diff, and Fast Diff encoder, but provides faster random access at a cost of slower encoding speed.
It was removed in hbase-2.0.0. It was a good idea but saw little uptake. If you are interested in reviving this effort, write the hbase dev list.
[[data.block.encoding.types]]
=== Which Compressor or Data Block Encoder To Use
The compression or codec type to use depends on the characteristics of your data. Choosing the wrong type could cause your data to take more space rather than less, and can have performance implications.
In general, you need to weigh your options between smaller size and faster compression/decompression. Following are some general guidelines, expanded from a discussion at link:https://lists.apache.org/thread.html/481e67a61163efaaf4345510447a9244871a8d428244868345a155ff%401378926618%40%3Cdev.hbase.apache.org%3E[Documenting Guidance on compression and codecs].
* In most cases, enabling LZ4 or Snappy by default is a good choice, because they have a low
performance overhead and provide reasonable space savings. A fast compression algorithm almost
always improves overall system performance by trading some increased CPU usage for better I/O
efficiency.
* If the values are large (and not pre-compressed, such as images), use a data block compressor.
* For [firstterm]_cold data_, which is accessed infrequently, depending on your use case, it might
make sense to opt for Zstandard at its higher compression levels, or LZMA, especially for high
entropy binary data, or Brotli for data similar in characteristics to web data. Bzip2 might also
be a reasonable option but Zstandard is very likely to offer superior decompression speed.
* For [firstterm]_hot data_, which is accessed frequently, you almost certainly want only LZ4,
Snappy, LZO, or Zstandard at a low compression level. These options will not provide as high of
a compression ratio but will in trade not unduly impact system performance.
* If you have long keys (compared to the values) or many columns, use a prefix encoder.
FAST_DIFF is recommended.
* If enabling WAL value compression, consider LZ4 or SNAPPY compression, or Zstandard at
level 1. Reading and writing the WAL is performance critical. That said, the I/O
savings of these compression options can improve overall system performance.
[[hadoop.native.lib]]
=== Making use of Hadoop Native Libraries in HBase
The Hadoop shared library has a bunch of facilities, including compression libraries and fast crc'ing -- hardware crc'ing if your chipset supports it.
To make this facility available to HBase, do the following. HBase/Hadoop will fall back to alternatives if it cannot find the native library
versions -- or fail outright if you are asking for an explicit compressor and there is no alternative available.
First make sure of your Hadoop. Fix this message if you are seeing it starting Hadoop processes:
----
16/02/09 22:40:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
----
It means Hadoop is not properly pointing at its native libraries, or the native libs were compiled for another platform.
Fix this first.
Then if you see the following in your HBase logs, you know that HBase was unable to locate the Hadoop native libraries:
[source]
----
2014-08-07 09:26:20,139 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
----
If the libraries loaded successfully, the WARN message does not show. Usually this means you are good to go but read on.
Let's presume your Hadoop shipped with a native library that suits the platform you are running HBase on.
To check if the Hadoop native library is available to HBase, run the following tool (available in Hadoop 2.1 and greater):
[source]
----
$ ./bin/hbase --config ~/conf_hbase org.apache.hadoop.util.NativeLibraryChecker
2014-08-26 13:15:38,717 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Native library checking:
hadoop: false
zlib: false
snappy: false
lz4: false
bzip2: false
2014-08-26 13:15:38,863 INFO [main] util.ExitUtil: Exiting with status 1
----
Above shows that the native hadoop library is not available in HBase context.
The above NativeLibraryChecker tool may come back saying all is hunky-dory
-- i.e. all libs show 'true', that they are available -- but follow the below
prescription anyway to ensure the native libs are available in HBase context,
when it goes to use them.
To fix the above, either copy the Hadoop native libraries local or symlink to them if the Hadoop and HBase installs are adjacent in the filesystem.
You could also point at their location by setting the `LD_LIBRARY_PATH` environment variable in your hbase-env.sh.
Where the JVM looks to find native libraries is "system dependent" (See `java.lang.System#loadLibrary(name)`). On Linux, by default, it is going to look in _lib/native/PLATFORM_ where `PLATFORM` is the label for the platform your HBase is installed on.
On a local linux machine, it seems to be the concatenation of the java properties `os.name` and `os.arch` followed by whether 32 or 64 bit.
HBase on startup prints out all of the java system properties so find the os.name and os.arch in the log.
For example:
[source]
----
...
2014-08-06 15:27:22,853 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2014-08-06 15:27:22,853 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
...
----
So in this case, the PLATFORM string is `Linux-amd64-64`.
Copying the Hadoop native libraries or symlinking at _lib/native/Linux-amd64-64_ will ensure they are found.
Perform a rolling restart after you have made this change.
Here is an example of how you would set up the symlinks.
Let the hadoop and hbase installs be in your home directory. Assume your hadoop native libs
are at ~/hadoop/lib/native. Assume you are on a Linux-amd64-64 platform. In this case,
you would do the following to link the hadoop native lib so hbase could find them.
----
...
$ mkdir -p ~/hbase/lib/native
$ cd ~/hbase/lib/native/
$ ln -s ~/hadoop/lib/native Linux-amd64-64
$ ls -la
# Linux-amd64-64 -> /home/USER/hadoop/lib/native
...
----
If you see PureJavaCrc32C in a stack trace or if you see something like the below in a perf trace, then native is not working; you are using the java CRC functions rather than native:
----
5.02% perf-53601.map [.] Lorg/apache/hadoop/util/PureJavaCrc32C;.update
----
See link:https://issues.apache.org/jira/browse/HBASE-11927[HBASE-11927 Use Native Hadoop Library for HFile checksum (And flip default from CRC32 to CRC32C)],
for more on native checksumming support. See in particular the release note for how to check whether your processor has support for hardware CRCs.
Or check out the Apache link:https://blogs.apache.org/hbase/entry/saving_cpu_using_native_hadoop[Checksums in HBase] blog post.
Here is example of how to point at the Hadoop libs with `LD_LIBRARY_PATH` environment variable:
[source]
----
$ LD_LIBRARY_PATH=~/hadoop-2.5.0-SNAPSHOT/lib/native ./bin/hbase --config ~/conf_hbase org.apache.hadoop.util.NativeLibraryChecker
2014-08-26 13:42:49,332 INFO [main] bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
2014-08-26 13:42:49,337 INFO [main] zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /home/stack/hadoop-2.5.0-SNAPSHOT/lib/native/libhadoop.so.1.0.0
zlib: true /lib64/libz.so.1
snappy: true /usr/lib64/libsnappy.so.1
lz4: true revision:99
bzip2: true /lib64/libbz2.so.1
----
Set in _hbase-env.sh_ the LD_LIBRARY_PATH environment variable when starting your HBase.
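For example, a line like the following in _hbase-env.sh_ (the Hadoop path shown is illustrative):
[source,bourne]
----
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/hadoop/lib/native
----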
=== Compressor Configuration, Installation, and Use
[[compressor.install]]
==== Configure HBase For Compressors
Compression codecs are provided either by HBase compressor modules or by Hadoop's native compression
support. As described above you choose a compression type in table or column family schema or in
site configuration using its short label, e.g. _snappy_ for Snappy, or _zstd_ for ZStandard. Which
codec implementation is dynamically loaded to support what label is configurable by way of site
configuration.
[options="header"]
|===
|Algorithm label|Codec implementation configuration key|Default value
//----------------------
|BROTLI|hbase.io.compress.brotli.codec|org.apache.hadoop.hbase.io.compress.brotli.BrotliCodec
|BZIP2|hbase.io.compress.bzip2.codec|org.apache.hadoop.io.compress.BZip2Codec
|GZ|hbase.io.compress.gz.codec|org.apache.hadoop.hbase.io.compress.ReusableStreamGzipCodec
|LZ4|hbase.io.compress.lz4.codec|org.apache.hadoop.io.compress.Lz4Codec
|LZMA|hbase.io.compress.lzma.codec|org.apache.hadoop.hbase.io.compress.xz.LzmaCodec
|LZO|hbase.io.compress.lzo.codec|com.hadoop.compression.lzo.LzoCodec
|SNAPPY|hbase.io.compress.snappy.codec|org.apache.hadoop.io.compress.SnappyCodec
|ZSTD|hbase.io.compress.zstd.codec|org.apache.hadoop.io.compress.ZStandardCodec
|===
The available codec implementation options are:
[options="header"]
|===
|Label|Codec implementation class|Notes
//----------------------
|BROTLI|org.apache.hadoop.hbase.io.compress.brotli.BrotliCodec|
Implemented with https://github.com/hyperxpro/Brotli4j[Brotli4j]
|BZIP2|org.apache.hadoop.io.compress.BZip2Codec|Hadoop native codec
|GZ|org.apache.hadoop.hbase.io.compress.ReusableStreamGzipCodec|
Requires the Hadoop native GZ codec
|LZ4|org.apache.hadoop.io.compress.Lz4Codec|Hadoop native codec
|LZ4|org.apache.hadoop.hbase.io.compress.aircompressor.Lz4Codec|
Pure Java implementation
|LZ4|org.apache.hadoop.hbase.io.compress.lz4.Lz4Codec|
Implemented with https://github.com/lz4/lz4-java[lz4-java]
|LZMA|org.apache.hadoop.hbase.io.compress.xz.LzmaCodec|
Implemented with https://tukaani.org/xz/java.html[XZ For Java]
|LZO|com.hadoop.compression.lzo.LzoCodec|Hadoop native codec,
requires GPL licensed native dependencies
|LZO|org.apache.hadoop.io.compress.LzoCodec|Hadoop native codec,
requires GPL licensed native dependencies
|LZO|org.apache.hadoop.hbase.io.compress.aircompressor.LzoCodec|
Pure Java implementation
|SNAPPY|org.apache.hadoop.io.compress.SnappyCodec|Hadoop native codec
|SNAPPY|org.apache.hadoop.hbase.io.compress.aircompressor.SnappyCodec|
Pure Java implementation
|SNAPPY|org.apache.hadoop.hbase.io.compress.xerial.SnappyCodec|
Implemented with https://github.com/xerial/snappy-java[snappy-java]
|ZSTD|org.apache.hadoop.io.compress.ZStandardCodec|Hadoop native codec
|ZSTD|org.apache.hadoop.hbase.io.compress.aircompressor.ZStdCodec|
Pure Java implementation, limited to a fixed compression level,
not data compatible with the Hadoop zstd codec
|ZSTD|org.apache.hadoop.hbase.io.compress.zstd.ZStdCodec|
Implemented with https://github.com/luben/zstd-jni[zstd-jni],
supports all compression levels, supports custom dictionaries
|===
Specify which codec implementation option you prefer for a given compression algorithm
in site configuration, like so:
[source]
----
...
<property>
<name>hbase.io.compress.lz4.codec</name>
<value>org.apache.hadoop.hbase.io.compress.lz4.Lz4Codec</value>
</property>
...
----
.Compressor Microbenchmarks
See https://github.com/apurtell/jmh-compression-tests
256MB (258,126,022 bytes exactly) of block data was extracted from two HFiles containing Common
Crawl data ingested using IntegrationLoadTestCommonCrawl, 2,680 blocks in total. This data was
processed by each new codec implementation as if the block data were being compressed again for
write into an HFile, but without writing any data, comparing only the CPU time and resource demand
of the codec itself. Absolute performance numbers will vary depending on hardware and software
particulars of your deployment. The relative differences are what are interesting. Measured time
is the average time in milliseconds required to compress all blocks of the 256MB file. This is
how long it would take to write the HFile containing these contents, minus the I/O overhead of
block encoding and actual persistence.
These are the results:
[options="header"]
|===
|Codec|Level|Time (milliseconds)|Result (bytes)|Improvement
//----------------------
|AirCompressor LZ4|-|349.989 ± 2.835|76,999,408|70.17%
|AirCompressor LZO|-|334.554 ± 3.243|79,369,805|69.25%
|AirCompressor Snappy|-|364.153 ± 19.718|80,201,763|68.93%
|AirCompressor Zstandard|3 (effective)|1108.267 ± 8.969|55,129,189|78.64%
|Brotli|1|593.107 ± 2.376|58,672,319|77.27%
|Brotli|3|1345.195 ± 27.327|53,917,438|79.11%
|Brotli|6|2812.411 ± 25.372|48,696,441|81.13%
|Brotli|10|74615.936 ± 224.854|44,970,710|82.58%
|LZ4 (lz4-java)|-|303.045 ± 0.783|76,974,364|70.18%
|LZMA|1|6410.428 ± 115.065|49,948,535|80.65%
|LZMA|3|8144.620 ± 152.119|49,109,363|80.97%
|LZMA|6|43802.576 ± 382.025|46,951,810|81.81%
|LZMA|9|49821.979 ± 580.110|46,951,810|81.81%
|Snappy (xerial)|-|360.225 ± 2.324|80,749,937|68.72%
|Zstd (zstd-jni)|1|654.699 ± 16.839|56,719,994|78.03%
|Zstd (zstd-jni)|3|839.160 ± 24.906|54,573,095|78.86%
|Zstd (zstd-jni)|5|1594.373 ± 22.384|52,025,485|79.84%
|Zstd (zstd-jni)|7|2308.705 ± 24.744|50,651,554|80.38%
|Zstd (zstd-jni)|9|3659.677 ± 58.018|50,208,425|80.55%
|Zstd (zstd-jni)|12|8705.294 ± 58.080|49,841,446|80.69%
|Zstd (zstd-jni)|15|19785.646 ± 278.080|48,499,508|81.21%
|Zstd (zstd-jni)|18|47702.097 ± 442.670|48,319,879|81.28%
|Zstd (zstd-jni)|22|97799.695 ± 1106.571|48,212,220|81.32%
|===
.Compressor Support On the Master
A configuration setting introduced in HBase 0.95 causes HBase to check the Master to determine which data block encoders are installed and configured on it, and to assume that the entire cluster is configured the same.
This option, `hbase.master.check.compression`, defaults to `true`.
This prevents the situation described in link:https://issues.apache.org/jira/browse/HBASE-6370[HBASE-6370], where a table is created or modified to support a codec that a region server does not support, leading to failures that take a long time to occur and are difficult to debug.
If `hbase.master.check.compression` is enabled, libraries for all desired compressors need to be installed and configured on the Master, even if the Master does not run a region server.
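For illustration, the setting lives in _hbase-site.xml_; here it is written out at its default:
[source,xml]
----
<property>
  <name>hbase.master.check.compression</name>
  <value>true</value>
</property>
----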
.Install GZ Support Via Native Libraries
HBase uses Java's built-in GZip support unless the native Hadoop libraries are available on the CLASSPATH.
The recommended way to add libraries to the CLASSPATH is to set the environment variable `HBASE_LIBRARY_PATH` for the user running HBase.
If native libraries are not available and Java's GZIP is used, `Got brand-new compressor` reports will be present in the logs.
See <<brand.new.compressor,brand.new.compressor>>).
[[lzo.compression]]
.Install Hadoop Native LZO Support
HBase cannot ship with the Hadoop native LZO codec because of incompatibility between HBase, which uses the Apache Software License (ASL), and LZO, which uses a GPL license.
See the link:https://github.com/twitter/hadoop-lzo/blob/master/README.md[Hadoop-LZO at Twitter] for information on configuring LZO support for HBase.
If you depend upon LZO compression, consider using the pure Java and ASL licensed
AirCompressor LZO codec option instead of the Hadoop native default, or configure your
RegionServers to fail to start if native LZO support is not available.
See <<hbase.regionserver.codecs,hbase.regionserver.codecs>>.
[[lz4.compression]]
.Configure Hadoop Native LZ4 Support
LZ4 support is bundled with Hadoop and is the default LZ4 codec implementation.
It is not required that you make use of the Hadoop LZ4 codec. Our LZ4 codec implemented
with lz4-java offers superior performance, and the AirCompressor LZ4 codec offers a
pure Java option for use where native support is not available.
That said, if you prefer the Hadoop option, make sure the hadoop shared library
(libhadoop.so) is accessible when you start HBase.
After configuring your platform (see <<hadoop.native.lib,hadoop.native.lib>>), you can
make a symbolic link from HBase to the native Hadoop libraries. This assumes the two
software installs are colocated. For example, if my 'platform' is Linux-amd64-64:
[source,bourne]
----
$ cd $HBASE_HOME
$ mkdir lib/native
$ ln -s $HADOOP_HOME/lib/native lib/native/Linux-amd64-64
----
Use the compression tool to check that LZ4 is installed on all nodes.
Start up (or restart) HBase.
Afterward, you can create and alter tables to enable LZ4 as a compression codec:
----
hbase(main):003:0> alter 'TestTable', {NAME => 'info', COMPRESSION => 'LZ4'}
----
[[snappy.compression.installation]]
.Install Hadoop native Snappy Support
Snappy support is bundled with Hadoop and is the default Snappy codec implementation.
It is not required that you make use of the Hadoop Snappy codec. Our Snappy codec
implemented with Xerial Snappy offers superior performance, and the AirCompressor
Snappy codec offers a pure Java option for use where native support is not available.
That said, if you prefer the Hadoop codec option, you can install Snappy binaries (for
instance, by using +yum install snappy+ on CentOS) or build Snappy from source.
After installing Snappy, search for the shared library, which will be called _libsnappy.so.X_ where X is a number.
If you built from source, copy the shared library to a known location on your system, such as _/opt/snappy/lib/_.
In addition to the Snappy library, HBase also needs access to the Hadoop shared library, which will be called something like _libhadoop.so.X.Y_, where X and Y are both numbers.
Make note of the location of the Hadoop library, or copy it to the same location as the Snappy library.
[NOTE]
====
The Snappy and Hadoop libraries need to be available on each node of your cluster.
See <<compression.test,compression.test>> to find out how to test that this is the case.
See <<hbase.regionserver.codecs,hbase.regionserver.codecs>> to configure your RegionServers to fail to start if a given compressor is not available.
====
Each of these library locations need to be added to the environment variable `HBASE_LIBRARY_PATH` for the operating system user that runs HBase.
You need to restart the RegionServer for the changes to take effect.
[[compression.test]]
.CompressionTest
You can use the CompressionTest tool to verify that your compressor is available to HBase:
----
$ hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://host/path/to/hbase snappy
----
[[hbase.regionserver.codecs]]
.Enforce Compression Settings On a RegionServer
You can configure a RegionServer so that it will fail to restart if compression is configured incorrectly, by adding the option hbase.regionserver.codecs to the _hbase-site.xml_, and setting its value to a comma-separated list of codecs that need to be available.
For example, if you set this property to `lzo,gz`, the RegionServer would fail to start if both compressors were not available.
This would prevent a new server from being added to the cluster without having codecs configured properly.
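Using the `lzo,gz` example from above, the corresponding _hbase-site.xml_ entry would be:
[source,xml]
----
<property>
  <name>hbase.regionserver.codecs</name>
  <value>lzo,gz</value>
</property>
----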
[[changing.compression]]
==== Enable Compression On a ColumnFamily
To enable compression for a ColumnFamily, use an `alter` command.
You do not need to re-create the table or copy data.
If you are changing codecs, be sure the old codec is still available until all the old StoreFiles have been compacted.
.Enabling Compression on a ColumnFamily of an Existing Table using HBaseShell
----
hbase> alter 'test', {NAME => 'cf', COMPRESSION => 'GZ'}
----
.Creating a New Table with Compression On a ColumnFamily
----
hbase> create 'test2', { NAME => 'cf2', COMPRESSION => 'SNAPPY' }
----
.Verifying a ColumnFamily's Compression Settings
----
hbase> describe 'test'
DESCRIPTION ENABLED
'test', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE false
', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0',
VERSIONS => '1', COMPRESSION => 'GZ', MIN_VERSIONS
=> '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'fa
lse', BLOCKSIZE => '65536', IN_MEMORY => 'false', B
LOCKCACHE => 'true'}
1 row(s) in 0.1070 seconds
----
==== Testing Compression Performance
HBase includes a tool called LoadTestTool which provides mechanisms to test your compression performance.
You must specify either `-write` or `-update-read` as your first parameter, and if you do not specify another parameter, usage advice is printed for each option.
.+LoadTestTool+ Usage
----
$ bin/hbase org.apache.hadoop.hbase.util.LoadTestTool -h
usage: bin/hbase org.apache.hadoop.hbase.util.LoadTestTool <options>
Options:
-batchupdate Whether to use batch as opposed to separate
updates for every column in a row
-bloom <arg> Bloom filter type, one of [NONE, ROW, ROWCOL]
-compression <arg> Compression type, one of [LZO, GZ, NONE, SNAPPY,
LZ4]
-data_block_encoding <arg> Encoding algorithm (e.g. prefix compression) to
use for data blocks in the test column family, one
of [NONE, PREFIX, DIFF, FAST_DIFF, ROW_INDEX_V1].
-encryption <arg> Enables transparent encryption on the test table,
one of [AES]
-generator <arg> The class which generates load for the tool. Any
args for this class can be passed as colon
separated after class name
-h,--help Show usage
-in_memory Tries to keep the HFiles of the CF inmemory as far
as possible. Not guaranteed that reads are always
served from inmemory
-init_only Initialize the test table only, don't do any
loading
-key_window <arg> The 'key window' to maintain between reads and
writes for concurrent write/read workload. The
default is 0.
-max_read_errors <arg> The maximum number of read errors to tolerate
before terminating all reader threads. The default
is 10.
-multiput Whether to use multi-puts as opposed to separate
puts for every column in a row
-num_keys <arg> The number of keys to read/write
-num_tables <arg> A positive integer number. When a number n is
speicfied, load test tool will load n table
parallely. -tn parameter value becomes table name
prefix. Each table name is in format
<tn>_1...<tn>_n
-read <arg> <verify_percent>[:<#threads=20>]
-regions_per_server <arg> A positive integer number. When a number n is
specified, load test tool will create the test
table with n regions per server
-skip_init Skip the initialization; assume test table already
exists
-start_key <arg> The first key to read/write (a 0-based index). The
default value is 0.
-tn <arg> The name of the table to read or write
-update <arg> <update_percent>[:<#threads=20>][:<#whether to
ignore nonce collisions=0>]
-write <arg> <avg_cols_per_key>:<avg_data_size>[:<#threads=20>]
-zk <arg> ZK quorum as comma-separated host names without
port numbers
-zk_root <arg> name of parent znode in zookeeper
----
.Example Usage of LoadTestTool
----
$ hbase org.apache.hadoop.hbase.util.LoadTestTool -write 1:10:100 -num_keys 1000000
-read 100:30 -num_tables 1 -data_block_encoding NONE -tn load_test_tool_NONE
----
[[data.block.encoding.enable]]
=== Enable Data Block Encoding
Codecs are built into HBase so no extra configuration is needed.
Codecs are enabled on a table by setting the `DATA_BLOCK_ENCODING` property.
Disable the table before altering its DATA_BLOCK_ENCODING setting.
Following is an example using HBase Shell:
.Enable Data Block Encoding On a Table
----
hbase> alter 'test', { NAME => 'cf', DATA_BLOCK_ENCODING => 'FAST_DIFF' }
Updating all regions with the new schema...
0/1 regions updated.
1/1 regions updated.
Done.
0 row(s) in 2.2820 seconds
----
.Verifying a ColumnFamily's Data Block Encoding
----
hbase> describe 'test'
DESCRIPTION ENABLED
'test', {NAME => 'cf', DATA_BLOCK_ENCODING => 'FAST true
_DIFF', BLOOMFILTER => 'ROW', REPLICATION_SCOPE =>
'0', VERSIONS => '1', COMPRESSION => 'GZ', MIN_VERS
IONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS =
> 'false', BLOCKSIZE => '65536', IN_MEMORY => 'fals
e', BLOCKCACHE => 'true'}
1 row(s) in 0.0650 seconds
----
:numbered:
ifdef::backend-docbook[]
[index]
== Index
// Generated automatically by the DocBook toolchain.
endif::backend-docbook[]
File diff suppressed because it is too large
@ -1,812 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[cp]]
= Apache HBase Coprocessors
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
HBase Coprocessors are modeled after Google BigTable's coprocessor implementation
(http://research.google.com/people/jeff/SOCC2010-keynote-slides.pdf pages 41-42.).
The coprocessor framework provides mechanisms for running your custom code directly on
the RegionServers managing your data. Efforts are ongoing to bridge gaps between HBase's
implementation and BigTable's architecture. For more information see
link:https://issues.apache.org/jira/browse/HBASE-4047[HBASE-4047].
The information in this chapter is primarily sourced and heavily reused from the following
resources:
. Mingjie Lai's blog post
link:https://blogs.apache.org/hbase/entry/coprocessor_introduction[Coprocessor Introduction].
. Gaurav Bhardwaj's blog post
link:http://www.3pillarglobal.com/insights/hbase-coprocessors[The How To Of HBase Coprocessors].
[WARNING]
.Use Coprocessors At Your Own Risk
====
Coprocessors are an advanced feature of HBase and are intended to be used by system
developers only. Because coprocessor code runs directly on the RegionServer and has
direct access to your data, they introduce the risk of data corruption, man-in-the-middle
attacks, or other malicious data access. Currently, there is no mechanism to prevent
data corruption by coprocessors, though work is underway on
link:https://issues.apache.org/jira/browse/HBASE-4047[HBASE-4047].
+
In addition, there is no resource isolation, so a well-intentioned but misbehaving
coprocessor can severely degrade cluster performance and stability.
====
== Coprocessor Overview
In HBase, you fetch data using a `Get` or `Scan`, whereas in an RDBMS you use a SQL
query. In order to fetch only the relevant data, you filter it using a HBase
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/Filter.html[Filter]
, whereas in an RDBMS you use a `WHERE` predicate.
After fetching the data, you perform computations on it. This paradigm works well
for "small data" with a few thousand rows and several columns. However, when you scale
to billions of rows and millions of columns, moving large amounts of data across your
network will create bottlenecks at the network layer, and the client needs to be powerful
enough and have enough memory to handle the large amounts of data and the computations.
In addition, the client code can grow large and complex.
In this scenario, coprocessors might make sense. You can put the business computation
code into a coprocessor which runs on the RegionServer, in the same location as the
data, and returns the result to the client.
This is only one scenario where using coprocessors can provide benefit. Following
are some analogies which may help to explain some of the benefits of coprocessors.
[[cp_analogies]]
=== Coprocessor Analogies
Triggers and Stored Procedure::
An Observer coprocessor is similar to a trigger in a RDBMS in that it executes
your code either before or after a specific event (such as a `Get` or `Put`)
occurs. An endpoint coprocessor is similar to a stored procedure in a RDBMS
because it allows you to perform custom computations on the data on the
RegionServer itself, rather than on the client.
MapReduce::
MapReduce operates on the principle of moving the computation to the location of
the data. Coprocessors operate on the same principle.
AOP::
If you are familiar with Aspect Oriented Programming (AOP), you can think of a coprocessor
as applying advice by intercepting a request and then running some custom code,
before passing the request on to its final destination (or even changing the destination).
=== Coprocessor Implementation Overview
. Your class should implement one of the Coprocessor interfaces -
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/Coprocessor.html[Coprocessor],
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html[RegionObserver],
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorService.html[CoprocessorService] - to name a few.
. Load the coprocessor, either statically (from the configuration) or dynamically,
using HBase Shell. For more details see <<cp_loading,Loading Coprocessors>>.
. Call the coprocessor from your client-side code. HBase handles the coprocessor
transparently.
The framework API is provided in the
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/coprocessor/package-summary.html[coprocessor]
package.
== Types of Coprocessors
=== Observer Coprocessors
Observer coprocessors are triggered either before or after a specific event occurs.
Observers that happen before an event use methods that start with a `pre` prefix,
such as link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html#prePut-org.apache.hadoop.hbase.coprocessor.ObserverContext-org.apache.hadoop.hbase.client.Put-org.apache.hadoop.hbase.wal.WALEdit-org.apache.hadoop.hbase.client.Durability-[`prePut`]. Observers that happen just after an event override methods that start
with a `post` prefix, such as link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html#postPut-org.apache.hadoop.hbase.coprocessor.ObserverContext-org.apache.hadoop.hbase.client.Put-org.apache.hadoop.hbase.wal.WALEdit-org.apache.hadoop.hbase.client.Durability-[`postPut`].
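As a minimal sketch of the shape of an observer, assuming the HBase 2.x API (the javadoc links above use these signatures), where the observer is exposed through `RegionCoprocessor`; the class name and empty hook body here are ours:
[source,java]
----
import java.io.IOException;
import java.util.Optional;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.wal.WALEdit;

public class ExamplePrePutObserver implements RegionCoprocessor, RegionObserver {

  @Override
  public Optional<RegionObserver> getRegionObserver() {
    // Hand the framework this instance so its pre/post hooks are invoked.
    return Optional.of(this);
  }

  @Override
  public void prePut(ObserverContext<RegionCoprocessorEnvironment> context,
      Put put, WALEdit edit, Durability durability) throws IOException {
    // Runs before the Put is applied to the region; custom checks go here.
  }
}
----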
==== Use Cases for Observer Coprocessors
Security::
Before performing a `Get` or `Put` operation, you can check for permission using
`preGet` or `prePut` methods.
Referential Integrity::
HBase does not directly support the RDBMS concept of referential integrity, also known
as foreign keys. You can use a coprocessor to enforce such integrity. For instance,
if you have a business rule that every insert to the `users` table must be followed
by a corresponding entry in the `user_daily_attendance` table, you could implement
a coprocessor to use the `prePut` method on the `users` table to insert a record into `user_daily_attendance`.
Secondary Indexes::
You can use a coprocessor to maintain secondary indexes. For more information, see
link:https://cwiki.apache.org/confluence/display/HADOOP2/Hbase+SecondaryIndexing[SecondaryIndexing].
==== Types of Observer Coprocessor
RegionObserver::
A RegionObserver coprocessor allows you to observe events on a region, such as `Get`
and `Put` operations. See
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html[RegionObserver].
RegionServerObserver::
A RegionServerObserver allows you to observe events related to the RegionServer's
operation, such as starting, stopping, or performing merges, commits, or rollbacks.
See
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionServerObserver.html[RegionServerObserver].
MasterObserver::
A MasterObserver allows you to observe events related to the HBase Master, such
as table creation, deletion, or schema modification. See
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/MasterObserver.html[MasterObserver].
WalObserver::
A WalObserver allows you to observe events related to writes to the Write-Ahead
Log (WAL). See
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/WALObserver.html[WALObserver].
<<cp_example,Examples>> provides working examples of observer coprocessors.
[[cpeps]]
=== Endpoint Coprocessor
Endpoint processors allow you to perform computation at the location of the data.
See <<cp_analogies, Coprocessor Analogy>>. An example is the need to calculate a running
average or summation for an entire table which spans hundreds of regions.
In contrast to observer coprocessors, where your code is run transparently, endpoint
coprocessors must be explicitly invoked using the
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/AsyncTable.html#coprocessorService-java.util.function.Function-org.apache.hadoop.hbase.client.ServiceCaller-byte:A-[CoprocessorService()]
method available in
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/AsyncTable.html[AsyncTable].
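As a hedged sketch of the client-side call pattern (the `SumService` stub and its request/response types are hypothetical placeholders for your own protobuf-generated endpoint classes):
[source,java]
----
// Hypothetical generated types: SumService (with a newStub factory and an async
// sum method), a SumRequest named 'request', and SumResponse. The future is a
// java.util.concurrent.CompletableFuture; only the call pattern is the point here.
CompletableFuture<SumResponse> future = asyncTable.coprocessorService(
    SumService::newStub,                     // build a stub over the region's RpcChannel
    (stub, controller, callback) -> stub.sum(controller, request, callback),
    row);                                    // the row determines which region is invoked
----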
[WARNING]
.On using coprocessorService method with sync client
====
The coprocessorService method in link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/Table.html[Table]
has been deprecated.
In link:https://issues.apache.org/jira/browse/HBASE-21512[HBASE-21512]
we reimplement the sync client based on the async client. The coprocessorService
method defined in `Table` interface directly references a method from protobuf's
`BlockingInterface`, which means we need to use a separate thread pool to execute
the method so we avoid blocking the async client (we want to avoid blocking calls in
our async implementation).
Since coprocessor is an advanced feature, we believe it is OK for coprocessor users to
instead switch over to use `AsyncTable`. There is a lightweight
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/Connection.html#toAsyncConnection--[toAsyncConnection]
method to get an `AsyncConnection` from `Connection` if needed.
====
Starting with HBase 0.96, endpoint coprocessors are implemented using Google Protocol
Buffers (protobuf). For more details on protobuf, see Google's
link:https://developers.google.com/protocol-buffers/docs/proto[Protocol Buffer Guide].
Endpoint Coprocessors written in version 0.94 are not compatible with version 0.96 or later
(see link:https://issues.apache.org/jira/browse/HBASE-5448[HBASE-5448]). To upgrade your
HBase cluster from 0.94 or earlier to 0.96 or later, you need to reimplement your
coprocessor.
In HBase 2.x, we made use of a shaded version of protobuf 3.x, but kept the
protobuf for coprocessors on 2.5.0. In HBase 3.0.0, we removed all dependencies on
non-shaded protobuf so you need to reimplement your coprocessor to make use of the
shaded protobuf version provided in hbase-thirdparty. Please see
the <<protobuf,protobuf>> section for more details.
Coprocessor Endpoints should make no use of HBase internals and
only avail of public APIs; ideally a CPEP should depend on Interfaces
and data structures only. This is not always possible but beware
that doing so makes the Endpoint brittle, liable to breakage as HBase
internals evolve. HBase internal APIs annotated as private or evolving
do not have to respect semantic versioning rules or general java rules on
deprecation before removal. While generated protobuf files lack the hbase audience
annotations -- they are created by the protobuf protoc tool which knows nothing of how
HBase works -- they should be considered `@InterfaceAudience.Private` and so are liable to
change.
<<cp_example,Examples>> provides working examples of endpoint coprocessors.
[[cp_loading]]
== Loading Coprocessors
To make your coprocessor available to HBase, it must be _loaded_, either statically
(through the HBase configuration) or dynamically (using HBase Shell or the Java API).
=== Static Loading
Follow these steps to statically load your coprocessor. Keep in mind that you must
restart HBase to unload a coprocessor that has been loaded statically.
. Define the Coprocessor in _hbase-site.xml_, with a <property> element with a <name>
and a <value> sub-element. The <name> should be one of the following:
+
- `hbase.coprocessor.region.classes` for RegionObservers and Endpoints.
- `hbase.coprocessor.wal.classes` for WALObservers.
- `hbase.coprocessor.master.classes` for MasterObservers.
+
<value> must contain the fully-qualified class name of your coprocessor's implementation
class.
+
For example to load a Coprocessor (implemented in class SumEndPoint.java) you have to create
following entry in RegionServer's 'hbase-site.xml' file (generally located under 'conf' directory):
+
[source,xml]
----
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.myname.hbase.coprocessor.endpoint.SumEndPoint</value>
</property>
----
+
If multiple classes are specified for loading, the class names must be comma-separated.
The framework attempts to load all the configured classes using the default class loader.
Therefore, the jar file must reside on the server-side HBase classpath.
+
Coprocessors which are loaded in this way will be active on all regions of all tables.
These are also called system Coprocessors.
The first listed Coprocessor will be assigned the priority `Coprocessor.Priority.SYSTEM`.
Each subsequent coprocessor in the list will have its priority value incremented by one (which
reduces its priority, because priorities have the natural sort order of Integers).
+
These priority values can be manually overridden in hbase-site.xml. This can be useful if you
want to guarantee that a coprocessor will execute after another. For example, in the following
configuration `SumEndPoint` would be guaranteed to go last, except in the case of a tie with
another coprocessor:
+
[source,xml]
----
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.myname.hbase.coprocessor.endpoint.SumEndPoint|2147483647</value>
</property>
----
+
When calling out to registered observers, the framework executes their callback methods in the
sorted order of their priority. +
Ties are broken arbitrarily.
. Put your code on HBase's classpath. One easy way to do this is to drop the jar
(containing your code and all the dependencies) into the `lib/` directory in the
HBase installation.
. Restart HBase.
=== Static Unloading
. Delete the coprocessor's <property> element, including sub-elements, from `hbase-site.xml`.
. Restart HBase.
. Optionally, remove the coprocessor's JAR file from the classpath or HBase's `lib/`
directory.
=== Dynamic Loading
You can also load a coprocessor dynamically, without restarting HBase. This may seem
preferable to static loading, but dynamically loaded coprocessors are loaded on a
per-table basis, and are only available to the table for which they were loaded. For
this reason, dynamically loaded coprocessors are sometimes called *Table Coprocessors*.
In addition, dynamically loading a coprocessor acts as a schema change on the table,
and the table must be taken offline to load the coprocessor.
There are three ways to dynamically load Coprocessor.
[NOTE]
.Assumptions
====
The instructions below make the following assumptions:
* A JAR called `coprocessor.jar` contains the Coprocessor implementation along with all of its
dependencies.
* The JAR is available in HDFS in some location like
`hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar`.
====
[[load_coprocessor_in_shell]]
==== Using HBase Shell
. Load the Coprocessor, using a command like the following:
+
[source]
----
hbase alter 'users', METHOD => 'table_att', 'Coprocessor'=>'hdfs://<namenode>:<port>/
user/<hadoop-user>/coprocessor.jar| org.myname.hbase.Coprocessor.RegionObserverExample|1073741823|
arg1=1,arg2=2'
----
+
The Coprocessor framework will try to read the class information from the coprocessor table
attribute value.
The value contains four pieces of information which are separated by the pipe (`|`) character.
+
* File path: The jar file containing the Coprocessor implementation must be in a location where
all region servers can read it. +
You could copy the file onto the local disk on each region server, but it is recommended to store
it in HDFS. +
https://issues.apache.org/jira/browse/HBASE-14548[HBASE-14548] allows a directory containing the jars
or some wildcards to be specified, such as: hdfs://<namenode>:<port>/user/<hadoop-user>/ or
hdfs://<namenode>:<port>/user/<hadoop-user>/*.jar. Please note that if a directory is specified,
all jar files (.jar) in the directory are added. It does not search for files in sub-directories.
Do not use a wildcard if you would like to specify a directory. This enhancement applies to the
usage via the JAVA API as well.
* Class name: The full class name of the Coprocessor.
* Priority: An integer. The framework will determine the execution sequence of all configured
observers registered at the same hook using priorities. This field can be left blank. In that
case the framework will assign a default priority value.
* Arguments (Optional): This field is passed to the Coprocessor implementation. This is optional.
. Verify that the coprocessor loaded:
+
----
hbase(main):04:0> describe 'users'
----
+
The coprocessor should be listed in the `TABLE_ATTRIBUTES`.
==== Using the Java API (all HBase versions)
The following Java code shows how to use the `setValue()` method of `HTableDescriptor`
to load a coprocessor on the `users` table.
[source,java]
----
TableName tableName = TableName.valueOf("users");
String path = "hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar";
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName);
HColumnDescriptor columnFamily1 = new HColumnDescriptor("personalDet");
columnFamily1.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily1);
HColumnDescriptor columnFamily2 = new HColumnDescriptor("salaryDet");
columnFamily2.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily2);
hTableDescriptor.setValue("COPROCESSOR$1", path + "|"
+ RegionObserverExample.class.getCanonicalName() + "|"
+ Coprocessor.PRIORITY_USER);
admin.modifyTable(tableName, hTableDescriptor);
----
==== Using the Java API (HBase 0.96+ only)
In HBase 0.96 and newer, the `addCoprocessor()` method of `HTableDescriptor` provides
an easier way to load a coprocessor dynamically.
[source,java]
----
TableName tableName = TableName.valueOf("users");
Path path = new Path("hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar");
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName);
HColumnDescriptor columnFamily1 = new HColumnDescriptor("personalDet");
columnFamily1.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily1);
HColumnDescriptor columnFamily2 = new HColumnDescriptor("salaryDet");
columnFamily2.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily2);
hTableDescriptor.addCoprocessor(RegionObserverExample.class.getCanonicalName(), path,
Coprocessor.PRIORITY_USER, null);
admin.modifyTable(tableName, hTableDescriptor);
----
WARNING: There is no guarantee that the framework will load a given Coprocessor successfully.
For example, the shell command neither guarantees a jar file exists at a particular location nor
verifies whether the given class is actually contained in the jar file.
=== Dynamic Unloading
==== Using HBase Shell
. Alter the table to remove the coprocessor.
+
[source]
----
hbase> alter 'users', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
----
==== Using the Java API
Reload the table definition, without setting the coprocessor value via either the
`setValue()` or `addCoprocessor()` method. This removes any coprocessor
attached to the table.
[source,java]
----
TableName tableName = TableName.valueOf("users");
String path = "hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar";
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName);
HColumnDescriptor columnFamily1 = new HColumnDescriptor("personalDet");
columnFamily1.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily1);
HColumnDescriptor columnFamily2 = new HColumnDescriptor("salaryDet");
columnFamily2.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily2);
admin.modifyTable(tableName, hTableDescriptor);
----
In HBase 0.96 and newer, you can instead use the `removeCoprocessor()` method of the
`HTableDescriptor` class.
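For illustration, here is a minimal sketch of that approach (assuming the `users` table and the `RegionObserverExample` class from the earlier examples):
[source,java]
----
TableName tableName = TableName.valueOf("users");
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
// Start from the table's current descriptor rather than rebuilding it,
// so column families and other attributes are preserved.
HTableDescriptor hTableDescriptor = new HTableDescriptor(admin.getTableDescriptor(tableName));
hTableDescriptor.removeCoprocessor(RegionObserverExample.class.getCanonicalName());
admin.modifyTable(tableName, hTableDescriptor);
----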
[[cp_example]]
== Examples
HBase ships with examples of Observer Coprocessors.
A more detailed example is given below.
These examples assume a table called `users`, which has two column families `personalDet`
and `salaryDet`, containing personal and salary details. Below is the graphical representation
of the `users` table.
.Users Table
[width="100%",cols="7",options="header,footer"]
|====================
| 3+|personalDet 3+|salaryDet
|*rowkey* |*name* |*lastname* |*dob* |*gross* |*net* |*allowances*
|admin |Admin |Admin | 3+|
|cdickens |Charles |Dickens |02/07/1812 |10000 |8000 |2000
|jverne |Jules |Verne |02/08/1828 |12000 |9000 |3000
|====================
=== Observer Example
The following Observer coprocessor prevents the details of the user `admin` from being
returned in a `Get` or `Scan` of the `users` table.
. Write a class that implements the
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionCoprocessor.html[RegionCoprocessor] and
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html[RegionObserver]
interfaces.
. Override the `preGetOp()` method (the `preGet()` method is deprecated) to check
whether the client has queried for the rowkey with value `admin`. If so, return an
empty result. Otherwise, process the request as normal.
. Put your code and dependencies in a JAR file.
. Place the JAR in HDFS where HBase can locate it.
. Load the Coprocessor.
. Write a simple program to test it.
The following is an implementation of the above steps:
[source,java]
----
public class RegionObserverExample implements RegionCoprocessor, RegionObserver {
private static final byte[] ADMIN = Bytes.toBytes("admin");
private static final byte[] COLUMN_FAMILY = Bytes.toBytes("details");
private static final byte[] COLUMN = Bytes.toBytes("Admin_det");
private static final byte[] VALUE = Bytes.toBytes("You can't see Admin details");
@Override
public Optional<RegionObserver> getRegionObserver() {
return Optional.of(this);
}
@Override
public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get, final List<Cell> results)
throws IOException {
if (Bytes.equals(get.getRow(),ADMIN)) {
Cell c = CellUtil.createCell(get.getRow(),COLUMN_FAMILY, COLUMN,
System.currentTimeMillis(), (byte)4, VALUE);
results.add(c);
e.bypass();
}
}
}
----
Overriding the `preGetOp()` will only work for `Get` operations. You also need to override
the `preScannerOpen()` method to filter the `admin` row from scan results.
[source,java]
----
@Override
public RegionScanner preScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> e, final Scan scan,
final RegionScanner s) throws IOException {
Filter filter = new RowFilter(CompareOp.NOT_EQUAL, new BinaryComparator(ADMIN));
scan.setFilter(filter);
return s;
}
----
This method works but there is a _side effect_. If the client has used a filter in
its scan, that filter will be replaced by this filter. Instead, you can explicitly
remove any `admin` results from the scan:
[source,java]
----
@Override
public boolean postScannerNext(final ObserverContext<RegionCoprocessorEnvironment> e, final InternalScanner s,
final List<Result> results, final int limit, final boolean hasMore) throws IOException {
Result result = null;
Iterator<Result> iterator = results.iterator();
while (iterator.hasNext()) {
result = iterator.next();
if (Bytes.equals(result.getRow(), ADMIN)) {
iterator.remove();
break;
}
}
return hasMore;
}
----
=== Endpoint Example
Still using the `users` table, this example implements a coprocessor to calculate
the sum of all employee salaries, using an endpoint coprocessor.
. Create a _.proto_ file defining your service.
+
[source]
----
option java_package = "org.myname.hbase.coprocessor.autogenerated";
option java_outer_classname = "Sum";
option java_generic_services = true;
option java_generate_equals_and_hash = true;
option optimize_for = SPEED;
message SumRequest {
required string family = 1;
required string column = 2;
}
message SumResponse {
required int64 sum = 1 [default = 0];
}
service SumService {
rpc getSum(SumRequest)
returns (SumResponse);
}
----
. Execute the `protoc` command to generate the Java code from the above _.proto_ file.
+
[source]
----
$ mkdir src
$ protoc --java_out=src ./sum.proto
----
+
This will generate a class called `Sum.java`.
. Write a class that extends the generated service class, implement the `Coprocessor`
and `CoprocessorService` interfaces, and override the service method.
+
WARNING: If you load a coprocessor from `hbase-site.xml` and then load the same coprocessor
again using HBase Shell, it will be loaded a second time. The same class will
exist twice, and the second instance will have a higher ID (and thus a lower priority).
The effect is that the duplicate coprocessor is effectively ignored.
+
[source, java]
----
public class SumEndPoint extends Sum.SumService implements Coprocessor, CoprocessorService {
private RegionCoprocessorEnvironment env;
@Override
public Service getService() {
return this;
}
@Override
public void start(CoprocessorEnvironment env) throws IOException {
if (env instanceof RegionCoprocessorEnvironment) {
this.env = (RegionCoprocessorEnvironment)env;
} else {
throw new CoprocessorException("Must be loaded on a table region!");
}
}
@Override
public void stop(CoprocessorEnvironment env) throws IOException {
// do nothing
}
@Override
public void getSum(RpcController controller, Sum.SumRequest request, RpcCallback<Sum.SumResponse> done) {
Scan scan = new Scan();
scan.addFamily(Bytes.toBytes(request.getFamily()));
scan.addColumn(Bytes.toBytes(request.getFamily()), Bytes.toBytes(request.getColumn()));
Sum.SumResponse response = null;
InternalScanner scanner = null;
try {
scanner = env.getRegion().getScanner(scan);
List<Cell> results = new ArrayList<>();
boolean hasMore = false;
long sum = 0L;
do {
hasMore = scanner.next(results);
for (Cell cell : results) {
sum = sum + Bytes.toLong(CellUtil.cloneValue(cell));
}
results.clear();
} while (hasMore);
response = Sum.SumResponse.newBuilder().setSum(sum).build();
} catch (IOException ioe) {
ResponseConverter.setControllerException(controller, ioe);
} finally {
if (scanner != null) {
try {
scanner.close();
} catch (IOException ignored) {}
}
}
done.run(response);
}
}
----
+
[source, java]
----
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
TableName tableName = TableName.valueOf("users");
Table table = connection.getTable(tableName);
final Sum.SumRequest request = Sum.SumRequest.newBuilder().setFamily("salaryDet").setColumn("gross").build();
try {
Map<byte[], Long> results = table.coprocessorService(
Sum.SumService.class,
null, /* start key */
null, /* end key */
new Batch.Call<Sum.SumService, Long>() {
@Override
public Long call(Sum.SumService aggregate) throws IOException {
BlockingRpcCallback<Sum.SumResponse> rpcCallback = new BlockingRpcCallback<>();
aggregate.getSum(null, request, rpcCallback);
Sum.SumResponse response = rpcCallback.get();
return response.hasSum() ? response.getSum() : 0L;
}
}
);
for (Long sum : results.values()) {
System.out.println("Sum = " + sum);
}
} catch (ServiceException e) {
e.printStackTrace();
} catch (Throwable e) {
e.printStackTrace();
}
----
. Load the Coprocessor.
. Write client code to call the Coprocessor.
== Guidelines For Deploying A Coprocessor
Bundling Coprocessors::
You can bundle all classes for a coprocessor into a
single JAR on the RegionServer's classpath, for easy deployment. Otherwise,
place all dependencies on the RegionServer's classpath so that they can be
loaded during RegionServer start-up. The classpath for a RegionServer is set
in the RegionServer's `hbase-env.sh` file.
Automating Deployment::
You can use a tool such as Puppet, Chef, or
Ansible to ship the JAR for the coprocessor to the required location on your
RegionServers' filesystems and restart each RegionServer, to automate
coprocessor deployment. The details of such setups are outside the scope of this
document.
Updating a Coprocessor::
Deploying a new version of a given coprocessor is not as simple as disabling it,
replacing the JAR, and re-enabling the coprocessor. This is because you cannot
reload a class in a JVM unless you delete all the current references to it.
Since the current JVM has reference to the existing coprocessor, you must restart
the JVM, by restarting the RegionServer, in order to replace it. This behavior
is not expected to change.
Coprocessor Logging::
The Coprocessor framework does not provide an API for logging beyond standard Java
logging.
Coprocessor Configuration::
If you do not want to load coprocessors from the HBase Shell, you can add their configuration
properties to `hbase-site.xml`. In <<load_coprocessor_in_shell>>, two arguments are
set: `arg1=1,arg2=2`. These could have been added to `hbase-site.xml` as follows:
[source,xml]
----
<property>
<name>arg1</name>
<value>1</value>
</property>
<property>
<name>arg2</name>
<value>2</value>
</property>
----
Then you can read the configuration using code like the following:
[source,java]
----
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
TableName tableName = TableName.valueOf("users");
Table table = connection.getTable(tableName);
Get get = new Get(Bytes.toBytes("admin"));
Result result = table.get(get);
for (Cell c : result.rawCells()) {
System.out.println(Bytes.toString(CellUtil.cloneRow(c))
+ "==> " + Bytes.toString(CellUtil.cloneFamily(c))
+ "{" + Bytes.toString(CellUtil.cloneQualifier(c))
+ ":" + Bytes.toLong(CellUtil.cloneValue(c)) + "}");
}
Scan scan = new Scan();
ResultScanner scanner = table.getScanner(scan);
for (Result res : scanner) {
for (Cell c : res.rawCells()) {
System.out.println(Bytes.toString(CellUtil.cloneRow(c))
+ " ==> " + Bytes.toString(CellUtil.cloneFamily(c))
+ " {" + Bytes.toString(CellUtil.cloneQualifier(c))
+ ":" + Bytes.toLong(CellUtil.cloneValue(c))
+ "}");
}
}
----
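Inside the coprocessor itself, these properties can then be read back through the environment's configuration. A minimal sketch (the property names `arg1` and `arg2` are simply the ones used in the example above):
[source,java]
----
@Override
public void start(CoprocessorEnvironment env) throws IOException {
  // Properties from hbase-site.xml are visible via the environment's configuration.
  Configuration conf = env.getConfiguration();
  String arg1 = conf.get("arg1");
  String arg2 = conf.get("arg2");
}
----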
== Restricting Coprocessor Usage
Restricting arbitrary user coprocessors can be a big concern in multitenant environments. HBase provides a continuum of options for ensuring only expected coprocessors are running:
* `hbase.coprocessor.enabled`: Enables or disables all coprocessors. This will limit the functionality of HBase, as disabling all coprocessors will disable some security providers. An example coprocessor so affected is `org.apache.hadoop.hbase.security.access.AccessController`.
* `hbase.coprocessor.user.enabled`: Enables or disables loading coprocessors on tables (i.e. user coprocessors).
* One can statically load coprocessors, and optionally tune their priorities, via the following tunables in `hbase-site.xml`:
** `hbase.coprocessor.regionserver.classes`: A comma-separated list of coprocessors that are loaded by region servers
** `hbase.coprocessor.region.classes`: A comma-separated list of RegionObserver and Endpoint coprocessors
** `hbase.coprocessor.user.region.classes`: A comma-separated list of coprocessors that are loaded by all regions
** `hbase.coprocessor.master.classes`: A comma-separated list of coprocessors that are loaded by the master (MasterObserver coprocessors)
** `hbase.coprocessor.wal.classes`: A comma-separated list of WALObserver coprocessors to load
* `hbase.coprocessor.abortonerror`: Whether to abort the daemon that has loaded the coprocessor if the coprocessor throws an error other than an `IOError`. If this is set to `false` and an access controller coprocessor has a fatal error, the coprocessor will be circumvented, so in secure installations this is advised to be `true`. However, one may override this on a per-table basis for user coprocessors, to ensure they do not abort their running region server and are instead unloaded on error.
* `hbase.coprocessor.region.whitelist.paths`: A comma-separated list, used by installations that load `org.apache.hadoop.hbase.security.access.CoprocessorWhitelistMasterObserver`, of paths from which coprocessors may be loaded. The following options are available:
** Coprocessors on the classpath are implicitly white-listed
** `*` to wildcard all coprocessor paths
** An entire filesystem (e.g. `hdfs://my-cluster/`)
** A wildcard path to be evaluated by link:https://commons.apache.org/proper/commons-io/javadocs/api-release/org/apache/commons/io/FilenameUtils.html[FilenameUtils.wildcardMatch]
** Note: a path may include a scheme or not (e.g. `file:///usr/hbase/lib/coprocessors`, or `/usr/hbase/lib/coprocessors` to match on all filesystems)

View File

@ -1,618 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[datamodel]]
= Data Model
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
In HBase, data is stored in tables, which have rows and columns.
This is a terminology overlap with relational databases (RDBMSs), but this is not a helpful analogy.
Instead, it can be helpful to think of an HBase table as a multi-dimensional map.
.HBase Data Model Terminology
Table::
An HBase table consists of multiple rows.
Row::
A row in HBase consists of a row key and one or more columns with values associated with them.
Rows are sorted alphabetically by the row key as they are stored.
For this reason, the design of the row key is very important.
The goal is to store data in such a way that related rows are near each other.
A common row key pattern is a website domain.
If your row keys are domains, you should probably store them in reverse (org.apache.www, org.apache.mail, org.apache.jira). This way, all of the Apache domains are near each other in the table, rather than being spread out based on the first letter of the subdomain.
Column::
A column in HBase consists of a column family and a column qualifier, which are delimited by a `:` (colon) character.
Column Family::
Column families physically colocate a set of columns and their values, often for performance reasons.
Each column family has a set of storage properties, such as whether its values should be cached in memory, how its data is compressed or its row keys are encoded, and others.
Each row in a table has the same column families, though a given row might not store anything in a given column family.
Column Qualifier::
A column qualifier is added to a column family to provide the index for a given piece of data.
Given a column family `content`, a column qualifier might be `content:html`, and another might be `content:pdf`.
Though column families are fixed at table creation, column qualifiers are mutable and may differ greatly between rows.
Cell::
A cell is a combination of row, column family, and column qualifier, and contains a value and a timestamp, which represents the value's version.
Timestamp::
A timestamp is written alongside each value, and is the identifier for a given version of a value.
By default, the timestamp represents the time on the RegionServer when the data was written, but you can specify a different timestamp value when you put data into the cell.
[[conceptual.view]]
== Conceptual View
You can read a very understandable explanation of the HBase data model in the blog post link:https://dzone.com/articles/understanding-hbase-and-bigtab[Understanding HBase and BigTable] by Jim R. Wilson.
Another good explanation is available in the PDF link:http://0b4af6cdc2f0c5998459-c0245c5c937c5dedcca3f1764ecc9b2f.r43.cf2.rackcdn.com/9353-login1210_khurana.pdf[Introduction to Basic Schema Design] by Amandeep Khurana.
It may help to read different perspectives to get a solid understanding of HBase schema design.
The linked articles cover the same ground as the information in this section.
The following example is a slightly modified form of the one on page 2 of the link:http://research.google.com/archive/bigtable.html[BigTable] paper.
There is a table called `webtable` that contains two rows (`com.cnn.www` and `com.example.www`) and three column families named `contents`, `anchor`, and `people`.
In this example, for the first row (`com.cnn.www`), `anchor` contains two columns (`anchor:cssnsi.com`, `anchor:my.look.ca`) and `contents` contains one column (`contents:html`). This example contains 5 versions of the row with the row key `com.cnn.www`, and one version of the row with the row key `com.example.www`.
The `contents:html` column qualifier contains the entire HTML of a given website.
Qualifiers of the `anchor` column family each contain the external site which links to the site represented by the row, along with the text it used in the anchor of its link.
The `people` column family represents people associated with the site.
.Column Names
[NOTE]
====
By convention, a column name is made of its column family prefix and a _qualifier_.
For example, the column _contents:html_ is made up of the column family `contents` and the `html` qualifier.
The colon character (`:`) delimits the column family from the column family _qualifier_.
====
.Table `webtable`
[cols="1,1,1,1,1", frame="all", options="header"]
|===
|Row Key |Time Stamp |ColumnFamily `contents` |ColumnFamily `anchor`|ColumnFamily `people`
|"com.cnn.www" |t9 | |anchor:cnnsi.com = "CNN" |
|"com.cnn.www" |t8 | |anchor:my.look.ca = "CNN.com" |
|"com.cnn.www" |t6 | contents:html = "<html>..." | |
|"com.cnn.www" |t5 | contents:html = "<html>..." | |
|"com.cnn.www" |t3 | contents:html = "<html>..." | |
|"com.example.www"| t5 | contents:html = "<html>..." | | people:author = "John Doe"
|===
Cells in this table that appear to be empty do not take space, and in fact do not exist, in HBase.
This is what makes HBase "sparse." A tabular view is not the only possible way to look at data in HBase, or even the most accurate.
The following represents the same information as a multi-dimensional map.
This is only a mock-up for illustrative purposes and may not be strictly accurate.
[source,json]
----
{
"com.cnn.www": {
contents: {
t6: contents:html: "<html>..."
t5: contents:html: "<html>..."
t3: contents:html: "<html>..."
}
anchor: {
t9: anchor:cnnsi.com = "CNN"
t8: anchor:my.look.ca = "CNN.com"
}
people: {}
}
"com.example.www": {
contents: {
t5: contents:html: "<html>..."
}
anchor: {}
people: {
t5: people:author: "John Doe"
}
}
}
----
[[physical.view]]
== Physical View
Although at a conceptual level tables may be viewed as a sparse set of rows, they are physically stored by column family.
A new column qualifier (column_family:column_qualifier) can be added to an existing column family at any time.
.ColumnFamily `anchor`
[cols="1,1,1", frame="all", options="header"]
|===
|Row Key | Time Stamp |Column Family `anchor`
|"com.cnn.www" |t9 |`anchor:cnnsi.com = "CNN"`
|"com.cnn.www" |t8 |`anchor:my.look.ca = "CNN.com"`
|===
.ColumnFamily `contents`
[cols="1,1,1", frame="all", options="header"]
|===
|Row Key |Time Stamp |ColumnFamily `contents:`
|"com.cnn.www" |t6 |contents:html = "<html>..."
|"com.cnn.www" |t5 |contents:html = "<html>..."
|"com.cnn.www" |t3 |contents:html = "<html>..."
|===
The empty cells shown in the conceptual view are not stored at all.
Thus a request for the value of the `contents:html` column at time stamp `t8` would return no value.
Similarly, a request for an `anchor:my.look.ca` value at time stamp `t9` would return no value.
However, if no timestamp is supplied, the most recent value for a particular column would be returned.
Given multiple versions, the most recent is also the first one found, since timestamps are stored in descending order.
Thus a request for the values of all columns in the row `com.cnn.www` if no timestamp is specified would be: the value of `contents:html` from timestamp `t6`, the value of `anchor:cnnsi.com` from timestamp `t9`, the value of `anchor:my.look.ca` from timestamp `t8`.
For more information about the internals of how Apache HBase stores data, see <<regions.arch,regions.arch>>.
== Namespace
A namespace is a logical grouping of tables analogous to a database in relational database systems.
This abstraction lays the groundwork for upcoming multi-tenancy related features:
* Quota Management (link:https://issues.apache.org/jira/browse/HBASE-8410[HBASE-8410]) - Restrict the amount of resources (i.e. regions, tables) a namespace can consume.
* Namespace Security Administration (link:https://issues.apache.org/jira/browse/HBASE-9206[HBASE-9206]) - Provide another level of security administration for tenants.
* Region server groups (link:https://issues.apache.org/jira/browse/HBASE-6721[HBASE-6721]) - A namespace/table can be pinned onto a subset of RegionServers thus guaranteeing a coarse level of isolation.
[[namespace_creation]]
=== Namespace management
A namespace can be created, removed or altered.
Namespace membership is determined during table creation by specifying a fully-qualified table name of the form:
[source,xml]
----
<table namespace>:<table qualifier>
----
.Examples
====
[source,bourne]
----
#Create a namespace
create_namespace 'my_ns'
----
[source,bourne]
----
#create my_table in my_ns namespace
create 'my_ns:my_table', 'fam'
----
[source,bourne]
----
#drop namespace
drop_namespace 'my_ns'
----
[source,bourne]
----
#alter namespace
alter_namespace 'my_ns', {METHOD => 'set', 'PROPERTY_NAME' => 'PROPERTY_VALUE'}
----
====
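The same operations are available through the Java `Admin` API. A minimal sketch of creating and dropping a namespace:
[source,java]
----
Admin admin = connection.getAdmin();
// Create a namespace
admin.createNamespace(NamespaceDescriptor.create("my_ns").build());
// Drop it again (the namespace must be empty)
admin.deleteNamespace("my_ns");
----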
[[namespace_special]]
=== Predefined namespaces
There are two predefined special namespaces:
* hbase - system namespace, used to contain HBase internal tables
* default - tables with no explicit specified namespace will automatically fall into this namespace
.Examples
====
[source,bourne]
----
#namespace=foo and table qualifier=bar
create 'foo:bar', 'fam'
#namespace=default and table qualifier=bar
create 'bar', 'fam'
----
====
== Table
Tables are declared up front at schema definition time.
== Row
Row keys are uninterpreted bytes.
Rows are lexicographically sorted with the lowest order appearing first in a table.
The empty byte array is used to denote both the start and end of a table's namespace.
[[columnfamily]]
== Column Family
Columns in Apache HBase are grouped into _column families_.
All column members of a column family have the same prefix.
For example, the columns _courses:history_ and _courses:math_ are both members of the _courses_ column family.
The colon character (`:`) delimits the column family from the column family qualifier.
The column family prefix must be composed of _printable_ characters.
The qualifying tail, the column family _qualifier_, can be made of any arbitrary bytes.
Column families must be declared up front at schema definition time whereas columns do not need to be defined at schema time but can be conjured on the fly while the table is up and running.
Physically, all column family members are stored together on the filesystem.
Because tunings and storage specifications are done at the column family level, it is advised that all column family members have the same general access pattern and size characteristics.
== Cells
A _{row, column, version}_ tuple exactly specifies a `cell` in HBase.
Cell content is uninterpreted bytes.
== Data Model Operations
The four primary data model operations are Get, Put, Scan, and Delete.
Operations are applied via link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table] instances.
=== Get
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html[Get] returns attributes for a specified row.
Gets are executed via link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#get-org.apache.hadoop.hbase.client.Get-[Table.get].
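A minimal sketch, assuming an open `connection` and a table with a column family `cf` and a qualifier `attr`:
[source,java]
----
Table table = connection.getTable(TableName.valueOf("users"));
Get get = new Get(Bytes.toBytes("row1"));
get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr")); // limit the Get to one column
Result result = table.get(get);
byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("attr"));
----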
=== Put
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html[Put] either adds new rows to a table (if the key is new) or can update existing rows (if the key already exists). Puts are executed via link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#put-org.apache.hadoop.hbase.client.Put-[Table.put] (non-writeBuffer) or link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#batch-java.util.List-java.lang.Object:A-[Table.batch] (non-writeBuffer)
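A minimal sketch, under the same assumptions as the Get example above:
[source,java]
----
Table table = connection.getTable(TableName.valueOf("users"));
Put put = new Put(Bytes.toBytes("row1"));
put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr"), Bytes.toBytes("value1"));
table.put(put);
----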
[[scan]]
=== Scans
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[Scan] allows iteration over multiple rows for specified attributes.
The following is an example of a Scan on a Table instance.
Assume that a table is populated with rows with keys "row1", "row2", "row3", and then another set of rows with the keys "abc1", "abc2", and "abc3". The following example shows how to set a Scan instance to return the rows beginning with "row".
[source,java]
----
public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Table table = ... // instantiate a Table instance
Scan scan = new Scan();
scan.addColumn(CF, ATTR);
scan.setStartStopRowForPrefixScan(Bytes.toBytes("row"));
ResultScanner rs = table.getScanner(scan);
try {
for (Result r = rs.next(); r != null; r = rs.next()) {
// process result...
}
} finally {
rs.close(); // always close the ResultScanner!
}
----
Note that generally the easiest way to specify a specific stop point for a scan is by using the link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/InclusiveStopFilter.html[InclusiveStopFilter] class.
=== Delete
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Delete.html[Delete] removes a row from a table.
Deletes are executed via link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#delete-org.apache.hadoop.hbase.client.Delete-[Table.delete].
HBase does not modify data in place, and so deletes are handled by creating new markers called _tombstones_.
These tombstones, along with the dead values, are cleaned up on major compactions.
See <<version.delete,version.delete>> for more information on deleting versions of columns, and see <<compaction,compaction>> for more information on compactions.
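A minimal sketch, again assuming a table with a column family `cf`:
[source,java]
----
Table table = connection.getTable(TableName.valueOf("users"));
Delete delete = new Delete(Bytes.toBytes("row1")); // marks the whole row as deleted
// Or, more narrowly, delete only the latest version of a single column:
// delete.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr"));
table.delete(delete);
----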
[[versions]]
== Versions
A _{row, column, version}_ tuple exactly specifies a `cell` in HBase.
It's possible to have an unbounded number of cells where the row and column are the same but the cell address differs only in its version dimension.
While rows and column keys are expressed as bytes, the version is specified using a long integer.
Typically this long contains time instances such as those returned by `java.util.Date.getTime()` or `System.currentTimeMillis()`, that is: [quote]_the difference, measured in milliseconds, between the current time and midnight, January 1, 1970 UTC_.
The HBase version dimension is stored in decreasing order, so that when reading from a store file, the most recent values are found first.
There is a lot of confusion over the semantics of `cell` versions, in HBase.
In particular:
* If multiple writes to a cell have the same version, only the last written is fetchable.
* It is OK to write cells in a non-increasing version order.
Below we describe how the version dimension in HBase currently works.
See link:https://issues.apache.org/jira/browse/HBASE-2406[HBASE-2406] for discussion of HBase versions. link:https://www.ngdata.com/bending-time-in-hbase/[Bending time in HBase] makes for a good read on the version, or time, dimension in HBase.
It has more detail on versioning than is provided here.
As of this writing, the limitation _Overwriting values at existing timestamps_ mentioned in the article no longer holds in HBase.
This section is basically a synopsis of this article by Bruno Dumon.
[[specify.number.of.versions]]
=== Specifying the Number of Versions to Store
The maximum number of versions to store for a given column is part of the column schema and is specified at table creation, or via an `alter` command, via `HColumnDescriptor.DEFAULT_VERSIONS`.
Prior to HBase 0.96, the default number of versions kept was `3`, but in 0.96 and newer it has been changed to `1`.
.Modify the Maximum Number of Versions for a Column Family
====
This example uses HBase Shell to keep a maximum of 5 versions of all columns in column family `f1`.
You could also use link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor].
----
hbase> alter 't1', NAME => 'f1', VERSIONS => 5
----
====
.Modify the Minimum Number of Versions for a Column Family
====
You can also specify the minimum number of versions to store per column family.
By default, this is set to 0, which means the feature is disabled.
The following example sets the minimum number of versions on all columns in column family `f1` to `2`, via HBase Shell.
You could also use link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor].
----
hbase> alter 't1', NAME => 'f1', MIN_VERSIONS => 2
----
====
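For reference, a sketch of making the same changes through the Java API (assuming table `t1` already exists; depending on your HBase version you may need to disable the table first):
[source,java]
----
Admin admin = connection.getAdmin();
TableName tableName = TableName.valueOf("t1");
// Start from the current descriptor so other attributes are preserved.
HTableDescriptor tableDescriptor = new HTableDescriptor(admin.getTableDescriptor(tableName));
HColumnDescriptor familyDescriptor = tableDescriptor.getFamily(Bytes.toBytes("f1"));
familyDescriptor.setMaxVersions(5); // keep at most 5 versions
familyDescriptor.setMinVersions(2); // keep at least 2 versions
tableDescriptor.modifyFamily(familyDescriptor);
admin.modifyTable(tableName, tableDescriptor);
----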
Starting with HBase 0.98.2, you can specify a global default for the maximum number of versions kept for all newly-created columns, by setting `hbase.column.max.version` in _hbase-site.xml_.
See <<hbase.column.max.version,hbase.column.max.version>>.
[[versions.ops]]
=== Versions and HBase Operations
In this section we look at the behavior of the version dimension for each of the core HBase operations.
==== Get/Scan
Gets are implemented on top of Scans.
The below discussion of link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html[Get] applies equally to link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[Scans].
By default, i.e. if you specify no explicit version, when doing a `get`, the cell whose version has the largest value is returned (which may or may not be the latest one written, see later). The default behavior can be modified in the following ways:
* to return more than one version, see link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#setMaxVersions--[Get.setMaxVersions()]
* to return versions other than the latest, see link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#setTimeRange-long-long-[Get.setTimeRange()]
+
To retrieve the latest version that is less than or equal to a given value, thus giving the 'latest' state of the record at a certain point in time, just use a range from 0 to the desired version and set the max versions to 1.
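A sketch of that 'point in time' pattern follows; note that the upper bound of a time range is exclusive, hence the `+ 1` (`desiredTimestamp` is a hypothetical variable holding the version of interest):
[source,java]
----
Get get = new Get(Bytes.toBytes("row1"));
get.setTimeRange(0, desiredTimestamp + 1); // versions with timestamp <= desiredTimestamp
get.setMaxVersions(1);                     // of those, return only the newest
Result r = table.get(get);
----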
==== Default Get Example
The following Get will only retrieve the current version of the row.
[source,java]
----
public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Get get = new Get(Bytes.toBytes("row1"));
Result r = table.get(get);
byte[] b = r.getValue(CF, ATTR); // returns current version of value
----
==== Versioned Get Example
The following Get will return the last 3 versions of the row.
[source,java]
----
public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Get get = new Get(Bytes.toBytes("row1"));
get.setMaxVersions(3); // will return last 3 versions of row
Result r = table.get(get);
byte[] b = r.getValue(CF, ATTR); // returns current version of value
List<Cell> cells = r.getColumnCells(CF, ATTR); // returns all versions of this column
----
==== Put
Doing a put always creates a new version of a `cell`, at a certain timestamp.
By default the system uses the server's `currentTimeMillis`, but you can specify the version (= the long integer) yourself, on a per-column level.
This means you could assign a time in the past or the future, or use the long value for non-time purposes.
To overwrite an existing value, do a put at exactly the same row, column, and version as that of the cell you want to overwrite.
===== Implicit Version Example
The following Put will be implicitly versioned by HBase with the current time.
[source,java]
----
public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Put put = new Put(Bytes.toBytes(row));
put.add(CF, ATTR, Bytes.toBytes(data));
table.put(put);
----
===== Explicit Version Example
The following Put has the version timestamp explicitly set.
[source,java]
----
public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Put put = new Put(Bytes.toBytes(row));
long explicitTimeInMs = 555; // just an example
put.add(CF, ATTR, explicitTimeInMs, Bytes.toBytes(data));
table.put(put);
----
Caution: the version timestamp is used internally by HBase for things like time-to-live calculations.
It's usually best to avoid setting this timestamp yourself.
Prefer using a separate timestamp attribute of the row, or have the timestamp as a part of the row key, or both.
===== Cell Version Example
The following Put uses the `getCellBuilder()` method to get a `CellBuilder` instance
that already has the relevant Type and Row set.
[source,java]
----
public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Put put = new Put(Bytes.toBytes(row));
put.add(put.getCellBuilder().setQualifier(ATTR)
.setFamily(CF)
.setValue(Bytes.toBytes(data))
.build());
table.put(put);
----
[[version.delete]]
==== Delete
There are three different types of internal delete markers.
See Lars Hofhansl's blog for discussion of his attempt adding another, link:http://hadoop-hbase.blogspot.com/2012/01/scanning-in-hbase.html[Scanning in HBase: Prefix Delete Marker].
* Delete: for a specific version of a column.
* Delete column: for all versions of a column.
* Delete family: for all columns of a particular ColumnFamily
When deleting an entire row, HBase will internally create a tombstone for each ColumnFamily (i.e., not each individual column).
Deletes work by creating _tombstone_ markers.
For example, let's suppose we want to delete a row.
For this you can specify a version, or else by default the `currentTimeMillis` is used.
What this means is _delete all cells where the version is less than or equal to this version_.
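For example, a sketch of a whole-row delete with an explicit version (`explicitTimestamp` is a hypothetical variable):
[source,java]
----
Delete delete = new Delete(Bytes.toBytes("row1"), explicitTimestamp);
// Masks all cells in the row whose version is <= explicitTimestamp.
table.delete(delete);
----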
HBase never modifies data in place, so for example a delete will not immediately delete (or mark as deleted) the entries in the storage file that correspond to the delete condition.
Rather, a so-called _tombstone_ is written, which will mask the deleted values.
When HBase does a major compaction, the tombstones are processed to actually remove the dead values, together with the tombstones themselves.
If the version you specified when deleting a row is larger than the version of any value in the row, then you can consider the complete row to be deleted.
For an informative discussion on how deletes and versioning interact, see the thread link:http://comments.gmane.org/gmane.comp.java.hadoop.hbase.user/28421[Put w/timestamp -> Deleteall -> Put w/ timestamp fails] up on the user mailing list.
Also see <<keyvalue,keyvalue>> for more information on the internal KeyValue format.
Delete markers are purged during the next major compaction of the store, unless the `KEEP_DELETED_CELLS` option is set in the column family (See <<cf.keep.deleted>>).
To keep the deletes for a configurable amount of time, you can set the delete TTL via the `hbase.hstore.time.to.purge.deletes` property in _hbase-site.xml_.
If `hbase.hstore.time.to.purge.deletes` is not set, or set to 0, all delete markers, including those with timestamps in the future, are purged during the next major compaction.
Otherwise, a delete marker with a timestamp in the future is kept until the major compaction which occurs after the time represented by the marker's timestamp plus the value of `hbase.hstore.time.to.purge.deletes`, in milliseconds.
NOTE: This behavior represents a fix for an unexpected change that was introduced in HBase 0.94, and was fixed in link:https://issues.apache.org/jira/browse/HBASE-10118[HBASE-10118].
The change has been backported to HBase 0.94 and newer branches.
[[new.version.behavior]]
=== Optional New Version and Delete behavior in HBase-2.0.0
In `hbase-2.0.0`, the operator can specify an alternate version and
delete treatment by setting the column descriptor property
`NEW_VERSION_BEHAVIOR` to true (To set a property on a column family
descriptor, you must first disable the table and then alter the
column family descriptor; see <<cf.keep.deleted>> for an example
of editing an attribute on a column family descriptor).
The 'new version behavior' undoes the limitations listed below,
whereby a `Delete` ALWAYS overshadows a `Put` if at the same
location -- i.e. same row, column family, qualifier and timestamp
-- regardless of which arrived first. Version accounting is also
changed, as deleted versions are counted toward the total version count.
This is done to ensure results are not changed should a major
compaction intercede. See `HBASE-15968` and linked issues for
discussion.
Running with this new configuration currently carries a cost: we factor
the Cell MVCC into every compare, so we burn more CPU. How much
slower it runs will vary; in testing we have seen between 0% and 25%
degradation.
If replicating, it is advised that you run with the new
serial replication feature (See `HBASE-9465`; the serial
replication feature did NOT make it into `hbase-2.0.0` but
should arrive in a subsequent hbase-2.x release) as now
the order in which Mutations arrive is a factor.
=== Current Limitations
The below limitations are addressed in hbase-2.0.0. See
the section above, <<new.version.behavior>>.
==== Deletes mask Puts
Deletes mask puts, even puts that happened after the delete was entered.
See link:https://issues.apache.org/jira/browse/HBASE-2256[HBASE-2256].
Remember that a delete writes a tombstone, which only disappears after the next major compaction has run.
Suppose you do a delete of everything <= T.
After this you do a new put with a timestamp <= T.
This put, even if it happened after the delete, will be masked by the delete tombstone.
Performing the put will not fail, but when you do a get you will notice the put had no effect.
It will start working again after the major compaction has run.
These issues should not be a problem if you use always-increasing versions for new puts to a row.
But they can occur even if you do not care about time: just do delete and put immediately after each other, and there is some chance they happen within the same millisecond.
[[major.compactions.change.query.results]]
==== Major compactions change query results
_...create three cell versions at t1, t2 and t3, with a maximum-versions
setting of 2. So when getting all versions, only the values at t2 and t3 will be
returned. But if you delete the version at t2 or t3, the one at t1 will appear again.
Obviously, once a major compaction has run, such behavior will not be the case
anymore..._ (See _Garbage Collection_ in link:https://www.ngdata.com/bending-time-in-hbase/[Bending time in HBase].)
[[dm.sort]]
== Sort Order
All data model operations in HBase return data in sorted order.
First by row, then by ColumnFamily, followed by column qualifier, and finally timestamp (sorted in reverse, so newest records are returned first).
[[dm.column.metadata]]
== Column Metadata
There is no store of column metadata outside of the internal KeyValue instances for a ColumnFamily.
Thus, while HBase can support not only a wide number of columns per row, but a heterogeneous set of columns between rows as well, it is your responsibility to keep track of the column names.
The only way to get a complete set of columns that exist for a ColumnFamily is to process all the rows.
For more information about how HBase stores data internally, see <<keyvalue,keyvalue>>.
[[joins]]
== Joins
Whether HBase supports joins is a common question on the dist-list, and there is a simple answer: it doesn't, at least not in the way that RDBMSs support them (e.g., with equi-joins or outer-joins in SQL). As has been illustrated in this chapter, the read data model operations in HBase are Get and Scan.
However, that doesn't mean that equivalent join functionality can't be supported in your application, but you have to do it yourself.
The two primary strategies are either to denormalize the data upon writing to HBase, or to have lookup tables and do the join between HBase tables in your application or MapReduce code (and as RDBMSs demonstrate, there are several strategies for this depending on the size of the tables, e.g., nested loops vs.
hash-joins). So which is the best approach? It depends on what you are trying to do, and as such there isn't a single answer that works for every use case.
== ACID
See link:/acid-semantics.html[ACID Semantics].
Lars Hofhansl has also written a note on link:http://hadoop-hbase.blogspot.com/2012/03/acid-in-hbase.html[ACID in HBase].
ifdef::backend-docbook[]
[index]
== Index
// Generated automatically by the DocBook toolchain.
endif::backend-docbook[]

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,141 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[appendix]
[[faq]]
== FAQ
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
=== General
When should I use HBase?::
See <<arch.overview>> in the Architecture chapter.
Does HBase support SQL?::
Not really. SQL-ish support for HBase via link:https://hive.apache.org/[Hive] is in development; however, Hive is based on MapReduce, which is not generally suitable for low-latency requests. See the <<datamodel>> section for examples on the HBase client.
How can I find examples of NoSQL/HBase?::
See the link to the BigTable paper in <<other.info>>, as well as the other papers.
What is the history of HBase?::
See <<hbase.history,hbase.history>>.
Why are the cells above 10MB not recommended for HBase?::
Large cells don't fit well into HBase's approach to buffering data. First, the large cells bypass the MemStoreLAB when they are written. Then, they cannot be cached in the L2 block cache during read operations. Instead, HBase has to allocate on-heap memory for them each time. This can have a significant impact on the garbage collector within the RegionServer process.
=== Upgrading
How do I upgrade Maven-managed projects from HBase 0.94 to HBase 0.96+?::
In HBase 0.96, the project moved to a modular structure. Adjust your project's dependencies to rely upon the `hbase-client` module or another module as appropriate, rather than a single JAR. You can model your Maven dependency after one of the following, depending on your targeted version of HBase. See Section 3.5, “Upgrading from 0.94.x to 0.96.x” or Section 3.3, “Upgrading from 0.96.x to 0.98.x” for more information.
+
.Maven Dependency for HBase 0.98
[source,xml]
----
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>0.98.5-hadoop2</version>
</dependency>
----
+
.Maven Dependency for HBase 0.96
[source,xml]
----
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>0.96.2-hadoop2</version>
</dependency>
----
+
.Maven Dependency for HBase 0.94
[source,xml]
----
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase</artifactId>
<version>0.94.3</version>
</dependency>
----
=== Architecture
How does HBase handle Region-RegionServer assignment and locality?::
See <<regions.arch>>.
=== Configuration
How can I get started with my first cluster?::
See <<quickstart>>.
Where can I learn about the rest of the configuration options?::
See <<configuration>>.
=== Schema Design / Data Access
How should I design my schema in HBase?::
See <<datamodel>> and <<schema>>.
How can I store (fill in the blank) in HBase?::
See <<supported.datatypes>>.
How can I handle secondary indexes in HBase?::
See <<secondary.indexes>>.
Can I change a table's rowkeys?::
This is a very common question. You can't. See <<changing.rowkeys>>.
What APIs does HBase support?::
See <<datamodel>>, <<architecture.client>>, and <<external_apis>>.
=== MapReduce
How can I use MapReduce with HBase?::
See <<mapreduce>>.
=== Performance and Troubleshooting
How can I improve HBase cluster performance?::
See <<performance>>.
How can I troubleshoot my HBase cluster?::
See <<trouble>>.
=== Amazon EC2
I am running HBase on Amazon EC2 and...::
EC2 issues are a special case. See <<trouble.ec2>> and <<perf.ec2>>.
=== Operations
How do I manage my HBase cluster?::
See <<ops_mgt>>.
How do I back up my HBase cluster?::
See <<ops.backup>>.
=== HBase in Action
Where can I find interesting videos and presentations on HBase?::
See <<other.info>>.
:numbered:

View File

@ -1,598 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[getting_started]]
= Getting Started
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
== Introduction
<<quickstart,Quickstart>> will get you up and running on a single-node, standalone instance of HBase.
[[quickstart]]
== Quick Start - Standalone HBase
This section describes the setup of a single-node standalone HBase.
A _standalone_ instance has all HBase daemons -- the Master, RegionServers,
and ZooKeeper -- running in a single JVM persisting to the local filesystem.
It is our most basic deploy profile. We will show you how
to create a table in HBase using the `hbase shell` CLI,
insert rows into the table, perform put and scan operations against the
table, enable or disable the table, and start and stop HBase.
Apart from downloading HBase, this procedure should take less than 10 minutes.
=== JDK Version Requirements
HBase requires that a JDK be installed.
See <<java,Java>> for information about supported JDK versions.
=== Get Started with HBase
.Procedure: Download, Configure, and Start HBase in Standalone Mode
. Choose a download site from this list of link:https://www.apache.org/dyn/closer.lua/hbase/[Apache Download Mirrors].
Click on the suggested top link.
This will take you to a mirror of _HBase Releases_.
Click on the folder named _stable_ and then download the binary file that ends in _.tar.gz_ to your local filesystem.
Do not download the file ending in _src.tar.gz_ for now.
. Extract the downloaded file, and change to the newly-created directory.
+
[source,subs="attributes"]
----
$ tar xzvf hbase-{Version}-bin.tar.gz
$ cd hbase-{Version}/
----
. You must set the `JAVA_HOME` environment variable before starting HBase.
To make this easier, HBase lets you set it within the _conf/hbase-env.sh_ file. You must locate where Java is
installed on your machine, and one way to find this is by using the _whereis java_ command. Once you have the location,
edit the _conf/hbase-env.sh_ file and uncomment the line starting with _#export JAVA_HOME=_, and then set it to your Java installation path.
+
.Example extract from _hbase-env.sh_ where _JAVA_HOME_ is set
----
# Set environment variables here.
# The java implementation to use.
export JAVA_HOME=/usr/jdk64/jdk1.8.0_112
----
+
. The _bin/start-hbase.sh_ script is provided as a convenient way to start HBase.
Issue the command, and if all goes well, a message is logged to standard output showing that HBase started successfully.
You can use the `jps` command to verify that you have one running process called `HMaster`.
In standalone mode HBase runs all daemons within this single JVM, i.e.
the HMaster, a single HRegionServer, and the ZooKeeper daemon.
Go to _http://localhost:16010_ to view the HBase Web UI.
[[shell_exercises]]
.Procedure: Use HBase For the First Time
. Connect to HBase.
+
Connect to your running instance of HBase using the `hbase shell` command, located in the [path]_bin/_ directory of your HBase install.
In this example, some usage and version information that is printed when you start HBase Shell has been omitted.
The HBase Shell prompt ends with a `>` character.
+
----
$ ./bin/hbase shell
hbase(main):001:0>
----
. Display HBase Shell Help Text.
+
Type `help` and press Enter, to display some basic usage information for HBase Shell, as well as several example commands.
Notice that table names, rows, and columns must all be enclosed in quote characters.
. Create a table.
+
Use the `create` command to create a new table.
You must specify the table name and the ColumnFamily name.
+
----
hbase(main):001:0> create 'test', 'cf'
0 row(s) in 0.4170 seconds
=> Hbase::Table - test
----
. List Information About your Table
+
Use the `list` command to confirm your table exists.
+
----
hbase(main):002:0> list 'test'
TABLE
test
1 row(s) in 0.0180 seconds
=> ["test"]
----
+
Now use the `describe` command to see details, including configuration defaults.
+
----
hbase(main):003:0> describe 'test'
Table test is ENABLED
test
COLUMN FAMILIES DESCRIPTION
{NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE =>
'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'f
alse', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE
=> '65536'}
1 row(s)
Took 0.9998 seconds
----
. Put data into your table.
+
To put data into your table, use the `put` command.
+
----
hbase(main):003:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 0.0850 seconds
hbase(main):004:0> put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0110 seconds
hbase(main):005:0> put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0100 seconds
----
+
Here, we insert three values, one at a time.
The first insert is at `row1`, column `cf:a`, with a value of `value1`.
Columns in HBase are comprised of a column family prefix, `cf` in this example, followed by a colon and then a column qualifier suffix, `a` in this case.
. Scan the table for all data at once.
+
One of the ways to get data from HBase is to scan.
Use the `scan` command to scan the table for data.
You can limit your scan, but for now, all data is fetched.
+
----
hbase(main):006:0> scan 'test'
ROW COLUMN+CELL
row1 column=cf:a, timestamp=1421762485768, value=value1
row2 column=cf:b, timestamp=1421762491785, value=value2
row3 column=cf:c, timestamp=1421762496210, value=value3
3 row(s) in 0.0230 seconds
----
. Get a single row of data.
+
To get a single row of data at a time, use the `get` command.
+
----
hbase(main):007:0> get 'test', 'row1'
COLUMN CELL
cf:a timestamp=1421762485768, value=value1
1 row(s) in 0.0350 seconds
----
. Disable a table.
+
If you want to delete a table or change its settings, as well as in some other situations, you need to disable the table first, using the `disable` command.
You can re-enable it using the `enable` command.
+
----
hbase(main):008:0> disable 'test'
0 row(s) in 1.1820 seconds
hbase(main):009:0> enable 'test'
0 row(s) in 0.1770 seconds
----
+
Disable the table again if you tested the `enable` command above:
+
----
hbase(main):010:0> disable 'test'
0 row(s) in 1.1820 seconds
----
. Drop the table.
+
To drop (delete) a table, use the `drop` command.
+
----
hbase(main):011:0> drop 'test'
0 row(s) in 0.1370 seconds
----
. Exit the HBase Shell.
+
To exit the HBase Shell and disconnect from your cluster, use the `quit` command.
HBase is still running in the background.
.Procedure: Stop HBase
. In the same way that the _bin/start-hbase.sh_ script is provided to conveniently start all HBase daemons, the _bin/stop-hbase.sh_ script stops them.
+
----
$ ./bin/stop-hbase.sh
stopping hbase....................
$
----
. After issuing the command, it can take several minutes for the processes to shut down.
Use the `jps` command to be sure that the HMaster and HRegionServer processes are shut down.
The above has shown you how to start and stop a standalone instance of HBase.
In the next sections we give a quick overview of other modes of HBase deployment.
[[quickstart_pseudo]]
=== Pseudo-Distributed for Local Testing
After working your way through <<quickstart,quickstart>> standalone mode,
you can re-configure HBase to run in pseudo-distributed mode.
Pseudo-distributed mode means that HBase still runs completely on a single host,
but each HBase daemon (HMaster, HRegionServer, and ZooKeeper) runs as a separate process,
whereas in standalone mode all daemons run in a single JVM process.
By default, unless you configure the `hbase.rootdir` property as described in
<<quickstart,quickstart>>, your data is still stored in _/tmp/_.
In this walk-through, we store your data in HDFS instead, assuming you have HDFS available.
You can skip the HDFS configuration to continue storing your data in the local filesystem.
.Hadoop Configuration
[NOTE]
====
This procedure assumes that you have configured Hadoop and HDFS on your local system and/or a remote
system, and that they are running and available. It also assumes you are using Hadoop 2.
The guide on
link:https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html[Setting up a Single Node Cluster]
in the Hadoop documentation is a good starting point.
====
. Stop HBase if it is running.
+
If you have just finished <<quickstart,quickstart>> and HBase is still running, stop it.
This procedure will create a totally new directory where HBase will store its data, so any databases you created before will be lost.
. Configure HBase.
+
Edit the _hbase-site.xml_ configuration.
First, add the following property which directs HBase to run in distributed mode, with one JVM instance per daemon.
+
[source,xml]
----
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
----
+
Next, add a configuration for `hbase.rootdir`, pointing to the address of your HDFS instance, using the `hdfs://` URI syntax.
In this example, HDFS is running on the localhost at port 8020.
+
[source,xml]
----
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:8020/hbase</value>
</property>
----
+
You do not need to create the directory in HDFS.
HBase will do this for you. If you create the directory, HBase will attempt to do a migration, which is not what you want.
+
Finally, remove existing configuration for `hbase.tmp.dir` and `hbase.unsafe.stream.capability.enforce`.
. Start HBase.
+
Use the _bin/start-hbase.sh_ command to start HBase.
If your system is configured correctly, the `jps` command should show the HMaster and HRegionServer processes running.
. Check the HBase directory in HDFS.
+
If everything worked correctly, HBase created its directory in HDFS.
In the configuration above, it is stored in _/hbase/_ on HDFS.
You can use the `hadoop fs` command in Hadoop's _bin/_ directory to list this directory.
+
----
$ ./bin/hadoop fs -ls /hbase
Found 7 items
drwxr-xr-x - hbase users 0 2014-06-25 18:58 /hbase/.tmp
drwxr-xr-x - hbase users 0 2014-06-25 21:49 /hbase/WALs
drwxr-xr-x - hbase users 0 2014-06-25 18:48 /hbase/corrupt
drwxr-xr-x - hbase users 0 2014-06-25 18:58 /hbase/data
-rw-r--r-- 3 hbase users 42 2014-06-25 18:41 /hbase/hbase.id
-rw-r--r-- 3 hbase users 7 2014-06-25 18:41 /hbase/hbase.version
drwxr-xr-x - hbase users 0 2014-06-25 21:49 /hbase/oldWALs
----
. Create a table and populate it with data.
+
You can use the HBase Shell to create a table, populate it with data, scan and get values from it, using the same procedure as in <<shell_exercises,shell exercises>>.
. Start and stop a backup HBase Master (HMaster) server.
+
NOTE: Running multiple HMaster instances on the same hardware does not make sense in a production environment, in the same way that running a pseudo-distributed cluster does not make sense for production.
This step is offered for testing and learning purposes only.
+
The HMaster server controls the HBase cluster.
You can start up to 9 backup HMaster servers, which makes 10 total HMasters, counting the primary.
To start a backup HMaster, use the `local-master-backup.sh` script.
For each backup master you want to start, add a parameter representing the port offset for that master.
Each HMaster uses two ports (16000 and 16010 by default). The port offset is added to these ports, so using an offset of 2, the backup HMaster would use ports 16002 and 16012.
The following command starts 3 backup servers using ports 16002/16012, 16003/16013, and 16005/16015.
+
----
$ ./bin/local-master-backup.sh start 2 3 5
----
+
To kill a backup master without killing the entire cluster, you need to find its process ID (PID). The PID is stored in a file with a name like _/tmp/hbase-USER-X-master.pid_.
The file's only content is the PID.
You can use the `kill -9` command to kill that PID.
The following command will kill the master with port offset 1, but leave the cluster running:
+
----
$ cat /tmp/hbase-testuser-1-master.pid |xargs kill -9
----
. Start and stop additional RegionServers
+
The HRegionServer manages the data in its StoreFiles as directed by the HMaster.
Generally, one HRegionServer runs per node in the cluster.
Running multiple HRegionServers on the same system can be useful for testing in pseudo-distributed mode.
The `local-regionservers.sh` command allows you to run multiple RegionServers.
It works in a similar way to the `local-master-backup.sh` command, in that each parameter you provide represents the port offset for an instance.
Each RegionServer requires two ports, and the default ports are 16020 and 16030.
Since HBase version 1.1.0, HMaster does not use region server ports; this leaves 10 ports (16020 to 16029 and 16030 to 16039) to be used for RegionServers.
To support additional RegionServers, set the environment variables HBASE_RS_BASE_PORT and HBASE_RS_INFO_BASE_PORT to appropriate values before running the `local-regionservers.sh` script.
For example, with values of 16200 and 16300 for the base ports, 99 additional RegionServers can be supported on a server.
The following command starts four additional RegionServers, running on sequential ports starting at 16022/16032 (base ports 16020/16030 plus 2).
+
----
$ ./bin/local-regionservers.sh start 2 3 4 5
----
+
To stop a RegionServer manually, use the `local-regionservers.sh` command with the `stop` parameter and the offset of the server to stop.
+
----
$ ./bin/local-regionservers.sh stop 3
----
. Stop HBase.
+
You can stop HBase the same way as in the <<quickstart,quickstart>> procedure, using the _bin/stop-hbase.sh_ command.
[[quickstart_fully_distributed]]
=== Fully Distributed for Production
In reality, you need a fully-distributed configuration to fully test HBase and to use it in real-world scenarios.
In a distributed configuration, the cluster contains multiple nodes, each of which runs one or more HBase daemons.
These include primary and backup Master instances, multiple ZooKeeper nodes, and multiple RegionServer nodes.
This advanced quickstart adds two more nodes to your cluster.
The architecture will be as follows:
.Distributed Cluster Demo Architecture
[cols="1,1,1,1", options="header"]
|===
| Node Name | Master | ZooKeeper | RegionServer
| node-a.example.com | yes | yes | no
| node-b.example.com | backup | yes | yes
| node-c.example.com | no | yes | yes
|===
This quickstart assumes that each node is a virtual machine and that they are all on the same network.
It builds upon the previous quickstart, <<quickstart_pseudo>>, assuming that the system you configured in that procedure is now `node-a`.
Stop HBase on `node-a` before continuing.
NOTE: Be sure that all the nodes have full access to communicate, and that no firewall rules are in place which could prevent them from talking to each other.
If you see any errors like `no route to host`, check your firewall.
[[passwordless.ssh.quickstart]]
.Procedure: Configure Passwordless SSH Access
`node-a` needs to be able to log into `node-b` and `node-c` (and to itself) in order to start the daemons.
The easiest way to accomplish this is to use the same username on all hosts, and configure password-less SSH login from `node-a` to each of the others.
. On `node-a`, generate a key pair.
+
While logged in as the user who will run HBase, generate a SSH key pair, using the following command:
+
[source,bash]
----
$ ssh-keygen -t rsa
----
+
If the command succeeds, the location of the key pair is printed to standard output.
The default name of the public key is _id_rsa.pub_.
. Create the directory that will hold the shared keys on the other nodes.
+
On `node-b` and `node-c`, log in as the HBase user and create a _.ssh/_ directory in the user's home directory, if it does not already exist.
If it already exists, be aware that it may already contain other keys.
. Copy the public key to the other nodes.
+
Securely copy the public key from `node-a` to each of the nodes, using `scp` or some other secure means.
On each of the other nodes, create a new file called _.ssh/authorized_keys_ _if it does
not already exist_, and append the contents of the _id_rsa.pub_ file to the end of it.
Note that you also need to do this for `node-a` itself.
+
----
$ cat id_rsa.pub >> ~/.ssh/authorized_keys
----
. Test password-less login.
+
If you performed the procedure correctly, you should not be prompted for a password when you SSH from `node-a` to either of the other nodes using the same username.
. Since `node-b` will run a backup Master, repeat the procedure above, substituting `node-b` everywhere you see `node-a`.
Be sure not to overwrite your existing _.ssh/authorized_keys_ files, but concatenate the new key onto the existing file using the `>>` operator rather than the `>` operator.
.Procedure: Prepare `node-a`
`node-a` will run your primary master and ZooKeeper processes, but no RegionServers. Stop the RegionServer from starting on `node-a`.
. Edit _conf/regionservers_ and remove the line which contains `localhost`. Add lines with the hostnames or IP addresses for `node-b` and `node-c`.
+
Even if you did want to run a RegionServer on `node-a`, you should refer to it by the hostname the other servers would use to communicate with it.
In this case, that would be `node-a.example.com`.
This enables you to distribute the configuration to each node of your cluster without any hostname conflicts.
Save the file.
. Configure HBase to use `node-b` as a backup master.
+
Create a new file in _conf/_ called _backup-masters_, and add a new line to it with the hostname for `node-b`.
In this demonstration, the hostname is `node-b.example.com`.
. Configure ZooKeeper.
+
In reality, you should carefully consider your ZooKeeper configuration.
You can find out more about configuring ZooKeeper in the <<zookeeper,zookeeper>> section.
This configuration will direct HBase to start and manage a ZooKeeper instance on each node of the cluster.
+
On `node-a`, edit _conf/hbase-site.xml_ and add the following properties.
+
[source,xml]
----
<property>
<name>hbase.zookeeper.quorum</name>
<value>node-a.example.com,node-b.example.com,node-c.example.com</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/zookeeper</value>
</property>
----
. Everywhere in your configuration that you have referred to `node-a` as `localhost`, change the reference to point to the hostname that the other nodes will use to refer to `node-a`.
In these examples, the hostname is `node-a.example.com`.
.Procedure: Prepare `node-b` and `node-c`
`node-b` will run a backup master server and a ZooKeeper instance.
. Download and unpack HBase.
+
Download and unpack HBase to `node-b`, just as you did for the standalone and pseudo-distributed quickstarts.
. Copy the configuration files from `node-a` to `node-b` and `node-c`.
+
Each node of your cluster needs to have the same configuration information.
Copy the contents of the _conf/_ directory to the _conf/_ directory on `node-b` and `node-c`.
.Procedure: Start and Test Your Cluster
. Be sure HBase is not running on any node.
+
If you forgot to stop HBase from previous testing, you will have errors.
Check to see whether HBase is running on any of your nodes by using the `jps` command.
Look for the processes `HMaster`, `HRegionServer`, and `HQuorumPeer`.
If they exist, kill them.
. Start the cluster.
+
On `node-a`, issue the `start-hbase.sh` command.
Your output will be similar to that below.
+
----
$ bin/start-hbase.sh
node-c.example.com: starting zookeeper, logging to /home/hbuser/hbase-0.98.3-hadoop2/bin/../logs/hbase-hbuser-zookeeper-node-c.example.com.out
node-a.example.com: starting zookeeper, logging to /home/hbuser/hbase-0.98.3-hadoop2/bin/../logs/hbase-hbuser-zookeeper-node-a.example.com.out
node-b.example.com: starting zookeeper, logging to /home/hbuser/hbase-0.98.3-hadoop2/bin/../logs/hbase-hbuser-zookeeper-node-b.example.com.out
starting master, logging to /home/hbuser/hbase-0.98.3-hadoop2/bin/../logs/hbase-hbuser-master-node-a.example.com.out
node-c.example.com: starting regionserver, logging to /home/hbuser/hbase-0.98.3-hadoop2/bin/../logs/hbase-hbuser-regionserver-node-c.example.com.out
node-b.example.com: starting regionserver, logging to /home/hbuser/hbase-0.98.3-hadoop2/bin/../logs/hbase-hbuser-regionserver-node-b.example.com.out
node-b.example.com: starting master, logging to /home/hbuser/hbase-0.98.3-hadoop2/bin/../logs/hbase-hbuser-master-nodeb.example.com.out
----
+
ZooKeeper starts first, followed by the master, then the RegionServers, and finally the backup masters.
. Verify that the processes are running.
+
On each node of the cluster, run the `jps` command and verify that the correct processes are running on each server.
You may see additional Java processes running on your servers as well, if they are used for other purposes.
+
.`node-a` `jps` Output
----
$ jps
20355 Jps
20071 HQuorumPeer
20137 HMaster
----
+
.`node-b` `jps` Output
----
$ jps
15930 HRegionServer
16194 Jps
15838 HQuorumPeer
16010 HMaster
----
+
.`node-c` `jps` Output
----
$ jps
13901 Jps
13639 HQuorumPeer
13737 HRegionServer
----
+
.ZooKeeper Process Name
[NOTE]
====
The `HQuorumPeer` process is a ZooKeeper instance which is controlled and started by HBase.
If you use ZooKeeper this way, it is limited to one instance per cluster node and is appropriate for testing only.
If ZooKeeper is run outside of HBase, the process is called `QuorumPeer`.
For more about ZooKeeper configuration, including using an external ZooKeeper instance with HBase, see the <<zookeeper,zookeeper>> section.
====
. Browse to the Web UI.
+
.Web UI Port Changes
[NOTE]
====
In HBase newer than 0.98.x, the HTTP ports used by the HBase Web UI changed from 60010 for the
Master and 60030 for each RegionServer to 16010 for the Master and 16030 for the RegionServer.
====
+
If everything is set up correctly, you should be able to connect to the UI for the Master at
`http://node-a.example.com:16010/` or the secondary master at `http://node-b.example.com:16010/`
using a web browser.
If you can connect via `localhost` but not from another host, check your firewall rules.
You can see the web UI for each of the RegionServers at port 16030 of their IP addresses, or by
clicking their links in the web UI for the Master.
. Test what happens when nodes or services disappear.
+
With the three-node cluster you have configured, things will not be very resilient.
You can still test the behavior of the primary Master or a RegionServer by killing the associated processes and watching the logs.
=== Where to go next
The next chapter, <<configuration,configuration>>, gives more information about the different HBase run modes, system requirements for running HBase, and critical configuration areas for setting up a distributed HBase cluster.

File diff suppressed because it is too large


@ -1,133 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[hbase_apis]]
= Apache HBase APIs
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
This chapter provides information about performing operations using HBase native APIs.
This information is not exhaustive, and provides a quick reference in addition to the link:https://hbase.apache.org/apidocs/index.html[User API Reference].
The examples here are not comprehensive or complete, and should be used for purposes of illustration only.
Apache HBase also works with multiple external APIs.
See <<external_apis>> for more information.
== Examples
.Create, modify and delete a Table Using Java
====
[source,java]
----
package com.example.hbase.admin;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
public class Example {
private static final String TABLE_NAME = "MY_TABLE_NAME_TOO";
private static final String CF_DEFAULT = "DEFAULT_COLUMN_FAMILY";
public static void createOrOverwrite(Admin admin, HTableDescriptor table) throws IOException {
if (admin.tableExists(table.getTableName())) {
admin.disableTable(table.getTableName());
admin.deleteTable(table.getTableName());
}
admin.createTable(table);
}
public static void createSchemaTables(Configuration config) throws IOException {
try (Connection connection = ConnectionFactory.createConnection(config);
Admin admin = connection.getAdmin()) {
HTableDescriptor table = new HTableDescriptor(TableName.valueOf(TABLE_NAME));
table.addFamily(new HColumnDescriptor(CF_DEFAULT).setCompressionType(Algorithm.NONE));
System.out.print("Creating table. ");
createOrOverwrite(admin, table);
System.out.println(" Done.");
}
}
public static void modifySchema (Configuration config) throws IOException {
try (Connection connection = ConnectionFactory.createConnection(config);
Admin admin = connection.getAdmin()) {
TableName tableName = TableName.valueOf(TABLE_NAME);
if (!admin.tableExists(tableName)) {
System.out.println("Table does not exist.");
System.exit(-1);
}
HTableDescriptor table = admin.getTableDescriptor(tableName);
// Update existing table
HColumnDescriptor newColumn = new HColumnDescriptor("NEWCF");
newColumn.setCompactionCompressionType(Algorithm.GZ);
newColumn.setMaxVersions(HConstants.ALL_VERSIONS);
admin.addColumn(tableName, newColumn);
// Update existing column family
HColumnDescriptor existingColumn = new HColumnDescriptor(CF_DEFAULT);
existingColumn.setCompactionCompressionType(Algorithm.GZ);
existingColumn.setMaxVersions(HConstants.ALL_VERSIONS);
table.modifyFamily(existingColumn);
admin.modifyTable(tableName, table);
// Disable an existing table
admin.disableTable(tableName);
// Delete an existing column family
admin.deleteColumn(tableName, CF_DEFAULT.getBytes("UTF-8"));
// Delete a table (Need to be disabled first)
admin.deleteTable(tableName);
}
}
public static void main(String... args) throws IOException {
Configuration config = HBaseConfiguration.create();
//Add any necessary configuration files (hbase-site.xml, core-site.xml)
config.addResource(new Path(System.getenv("HBASE_CONF_DIR"), "hbase-site.xml"));
config.addResource(new Path(System.getenv("HADOOP_CONF_DIR"), "core-site.xml"));
createSchemaTables(config);
modifySchema(config);
}
}
----
====


@ -1,37 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[appendix]
[[hbase.history]]
== HBase History
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
* 2006: link:http://research.google.com/archive/bigtable.html[BigTable] paper published by Google.
* 2006 (end of year): HBase development starts.
* 2008: HBase becomes Hadoop sub-project.
* 2010: HBase becomes Apache top-level project.
:numbered:


@ -1,675 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[hbase_mob]]
== Storing Medium-sized Objects (MOB)
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
:source-language: java
Data comes in many sizes, and saving all of your data in HBase, including binary
data such as images and documents, is ideal. While HBase can technically handle
binary objects with cells that are larger than 100 KB in size, HBase's normal
read and write paths are optimized for values smaller than 100KB in size. When
HBase deals with large numbers of objects over this threshold, referred to here
as medium objects, or MOBs, performance is degraded due to write amplification
caused by splits and compactions. When using MOBs, ideally your objects will be between
100KB and 10MB (see the <<faq>>). HBase 2 added special internal handling of MOBs
to maintain performance, consistency, and low operational overhead. MOB support is
provided by the work done in link:https://issues.apache.org/jira/browse/HBASE-11339[HBASE-11339].
To take advantage of MOB, you need to use <<hfilev3,HFile version 3>>. Optionally,
configure the MOB file reader's cache settings for each RegionServer (see
<<mob.cache.configure>>), then configure specific columns to hold MOB data.
Client code does not need to change to take advantage of HBase MOB support. The
feature is transparent to the client.
=== Configuring Columns for MOB
You can configure columns to support MOB during table creation or alteration,
either in HBase Shell or via the Java API. The two relevant properties are the
boolean `IS_MOB` and the `MOB_THRESHOLD`, which is the number of bytes at which
an object is considered to be a MOB. Only `IS_MOB` is required. If you do not
specify the `MOB_THRESHOLD`, the default threshold value of 100 KB is used.
.Configure a Column for MOB Using HBase Shell
----
hbase> create 't1', {NAME => 'f1', IS_MOB => true, MOB_THRESHOLD => 102400}
hbase> alter 't1', {NAME => 'f1', IS_MOB => true, MOB_THRESHOLD => 102400}
----
.Configure a Column for MOB Using the Java API
====
[source,java]
----
...
HColumnDescriptor hcd = new HColumnDescriptor("f");
hcd.setMobEnabled(true);
...
hcd.setMobThreshold(102400L);
...
----
====
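The `HColumnDescriptor` API shown above is deprecated in HBase 2.x in favor of the builder-based
descriptors. A minimal sketch of the same settings using the builders (an `Admin` instance named
`admin` is assumed, and the table and family names are illustrative):

====
[source,java]
----
...
TableDescriptor tableDescriptor = TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
  .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f1"))
    .setMobEnabled(true)       // IS_MOB => true
    .setMobThreshold(102400L)  // MOB_THRESHOLD => 102400
    .build())
  .build();
admin.createTable(tableDescriptor);
...
----
====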
=== Testing MOB
The utility `org.apache.hadoop.hbase.IntegrationTestIngestWithMOB` is provided to assist with testing
the MOB feature. The utility is run as follows:
[source,bash]
----
$ sudo -u hbase hbase org.apache.hadoop.hbase.IntegrationTestIngestWithMOB \
-threshold 1024 \
-minMobDataSize 512 \
-maxMobDataSize 5120
----
* `*threshold*` is the threshold at which cells are considered to be MOBs.
The default is 1 kB, expressed in bytes.
* `*minMobDataSize*` is the minimum value for the size of MOB data.
The default is 512 B, expressed in bytes.
* `*maxMobDataSize*` is the maximum value for the size of MOB data.
The default is 5 kB, expressed in bytes.
=== MOB architecture
This section is derived from information found in
link:https://issues.apache.org/jira/browse/HBASE-11339[HBASE-11339], which covered the initial GA
implementation of MOB in HBase and
link:https://issues.apache.org/jira/browse/HBASE-22749[HBASE-22749], which improved things by
parallelizing MOB maintenance across the RegionServers. For more information see
the last version of the design doc created during the initial work,
"link:https://github.com/apache/hbase/blob/master/dev-support/design-docs/HBASE-11339%20MOB%20GA%20design.pdf[HBASE-11339 MOB GA design.pdf]",
and the design doc for the distributed mob compaction feature,
"link:https://github.com/apache/hbase/blob/master/dev-support/design-docs/HBASE-22749%20MOB%20distributed%20compaction.pdf[HBASE-22749 MOB distributed compaction.pdf]".
==== Overview
The MOB feature reduces the overall IO load for configured column families by storing values that
are larger than the configured threshold outside of the normal regions to avoid splits, merges, and
most importantly normal compactions.
When a cell is first written to a region it is stored in the WAL and memstore regardless of value
size. When memstores from a column family configured to use MOB are eventually flushed, two hfiles
are written simultaneously. Cells with a value smaller than the threshold size are written to a
normal region hfile. Cells with a value larger than the threshold are written into a special MOB
hfile and also have a MOB reference cell written into the normal region HFile. As the Region Server
flushes a MOB enabled memstore and closes a given normal region HFile it appends metadata that lists
each of the special MOB hfiles referenced by the cells within.
MOB reference cells have the same key as the cell they are based on. The value of the reference cell
is made up of two pieces of metadata: the size of the actual value and the MOB hfile that contains
the original cell. In addition to any tags originally written to HBase, the reference cell prepends
two additional tags. The first is a marker tag that says the cell is a MOB reference. This can be
used later to scan specifically just for reference cells. The second stores the namespace and table
at the time the MOB hfile is written out. This tag is used to optimize how the MOB system finds
the underlying value in MOB hfiles after a series of HBase snapshot operations (ref HBASE-12332).
Note that tags are only available within HBase servers and by default are not sent over RPCs.
All MOB hfiles for a given table are managed within a logical region that does not directly serve
requests. When these MOB hfiles are created from a flush or MOB compaction they are placed in a
dedicated mob data area under the hbase root directory specific to the namespace, table, mob
logical region, and column family. In general that means a path structured like:
----
%HBase Root Dir%/mobdir/data/%namespace%/%table%/%logical region%/%column family%/
----
With default configs, for an example table named 'some_table' in the
default namespace with a MOB enabled column family named 'foo', this HDFS directory would be
----
/hbase/mobdir/data/default/some_table/372c1b27e3dc0b56c3a031926e5efbe9/foo/
----
These MOB hfiles are maintained by special chores in the HBase Master and across the individual
Region Servers. Specifically those chores take care of enforcing TTLs and compacting them. Note that
this compaction is primarily a matter of controlling the total number of files in HDFS because our
operational assumption for MOB data is that it will seldom be updated or deleted.
When a given MOB hfile is no longer needed as a result of our compaction process then a chore in
the Master will take care of moving it to the archive just
like any normal hfile. Because the table's mob region is independent of all the normal regions it
can coexist with them in the regular archive storage area:
----
/hbase/archive/data/default/some_table/372c1b27e3dc0b56c3a031926e5efbe9/foo/
----
The same hfile cleaning chores that take care of eventually deleting unneeded archived files from
normal regions thus also will take care of these MOB hfiles. As such, if there is a snapshot of a
MOB enabled table then the cleaning system will make sure those MOB files stick around in the
archive area as long as they are needed by a snapshot or a clone of a snapshot.
==== MOB compaction
Each time the memstore for a MOB enabled column family performs a flush HBase will write values over
the MOB threshold into MOB specific hfiles. When normal region compaction occurs the Region Server
rewrites the normal data files while maintaining references to these MOB files without rewriting
them. Normal client lookups for MOB values transparently will receive the original values because
the Region Server internals take care of using the reference data to then pull the value out of a
specific MOB file. This indirection means that building up a large number of MOB hfiles doesn't
impact the overall time to retrieve any specific MOB cell. Thus, we need not perform compactions of
the MOB hfiles nearly as often as normal hfiles. As a result, HBase saves IO by not rewriting MOB
hfiles as a part of the periodic compactions a Region Server does on its own.
However, if deletes and updates of MOB cells are frequent then this indirection will begin to waste
space. The only way to stop using the space of a particular MOB hfile is to ensure no cells still
hold references to it. To do that we need to ensure we have written the current values into a new
MOB hfile. If our backing filesystem has a limitation on the number of files that can be present, as
HDFS does, then even if we do not have deletes or updates of MOB cells eventually there will be a
sufficient number of MOB hfiles that we will need to coalesce them.
Periodically a chore in the master coordinates having the region servers
perform a special major compaction that also handles rewriting new MOB files. Like all compactions
the Region Server will create updated hfiles that hold both the cells that are smaller than the MOB
threshold and cells that hold references to the newly rewritten MOB file. Because this rewriting has
the advantage of looking across all active cells for the region our several small MOB files should
end up as a single MOB file per region. The chore defaults to running weekly and can be
configured by setting `hbase.mob.compaction.chore.period` to the desired period in seconds.
====
[source,xml]
----
<property>
<name>hbase.mob.compaction.chore.period</name>
<value>2592000</value>
<description>Example of changing the chore period from a week to a month.</description>
</property>
----
====
By default, the periodic MOB compaction coordination chore will attempt to keep every region
busy doing compactions in parallel in order to maximize the amount of work done on the cluster.
If you need to tune the amount of IO this compaction generates on the underlying filesystem, you
can control how many concurrent region-level compaction requests are allowed by setting
`hbase.mob.major.compaction.region.batch.size` to an integer number greater than zero. If you set
the configuration to 0 then you will get the default behavior of attempting to do all regions in
parallel.
====
[source,xml]
----
<property>
<name>hbase.mob.major.compaction.region.batch.size</name>
<value>1</value>
<description>Example of switching from "as parallel as possible" to "serially"</description>
</property>
----
====
==== MOB file archiving
Eventually we will have MOB hfiles that are no longer needed. Either clients will overwrite the
value or a MOB-rewriting compaction will store a reference to a newer larger MOB hfile. Because any
given MOB cell could have originally been written either in the current region or in a parent region
that existed at some prior point in time, individual Region Servers do not decide when it is time
to archive MOB hfiles. Instead a periodic chore in the Master evaluates MOB hfiles for archiving.
A MOB HFile will be subject to archiving under any of the following conditions:
* Any MOB HFile older than the column family's TTL
* Any MOB HFile older than a "too recent" threshold with no references to it from the regular hfiles
for all regions in a column family
To determine if a MOB HFile meets the second criteria the chore extracts metadata from the regular
HFiles for each MOB enabled column family for a given table. That metadata enumerates the complete
set of MOB HFiles needed to satisfy the references stored in the normal HFile area.
The period of the cleaner chore can be configured by setting `hbase.master.mob.cleaner.period` to a
positive integer number of seconds. It defaults to running daily. You should not need to tune it
unless you have a very aggressive TTL or a very high rate of MOB updates with a correspondingly
high rate of non-MOB compactions.
=== MOB Optimization Tasks
==== Further limiting write amplification
If your MOB workload has few to no updates or deletes then you can opt-in to MOB compactions that
optimize for limiting the amount of write amplification. It achieves this by setting a
size threshold to ignore MOB files during the compaction process. When a given region goes
through MOB compaction it will evaluate the size of the MOB file that currently holds the actual
value and skip rewriting the value if that file is over threshold.
The bound of write amplification in this mode can be approximated as
stem:["Write Amplification" = log_K(M/S)] where *K* is the number of files in compaction
selection, *M* is the configurable threshold for MOB file size, and *S* is the minimum size of
memstore flushes that create MOB files in the first place. For example given 5 files picked up per
compaction, a threshold of 1 GB, and a flush size of 10MB the write amplification will be
stem:[log_5((1GB)/(10MB)) = log_5(100) = 2.86].
If we are using an underlying filesystem with a limitation on the number of files, such as HDFS,
and we know our expected data set size we can choose our maximum file size in order to approach
this limit but stay within it in order to minimize write amplification. For example, if we expect to
store a petabyte and we have a conservative limitation of a million files in our HDFS instance, then
stem:[(1PB)/(1M) = 1GB] gives us a target limitation of a gigabyte per MOB file.
To opt-in to this compaction mode you must set `hbase.mob.compaction.type` to `optimized`. The
default MOB size threshold in this mode is set to 1GB. It can be changed by setting
`hbase.mob.compactions.max.file.size` to a positive integer number of bytes.
====
[source,xml]
----
<property>
<name>hbase.mob.compaction.type</name>
<value>optimized</value>
<description>opt-in to write amplification optimized mob compaction.</description>
</property>
<property>
<name>hbase.mob.compactions.max.file.size</name>
<value>10737418240</value>
  <description>Example of tuning the max mob file size to 10GB</description>
</property>
----
====
Additionally, when operating in this mode the compaction process will seek to avoid writing MOB
files that are over the max file threshold. As it writes additional MOB values into a MOB
hfile it will check to see whether the additional data causes the hfile to exceed the max file size.
When the hfile of MOB values reaches the limit, the MOB hfile is committed to the MOB storage area and
a new one is created. The hfile with reference cells will track the complete set of MOB hfiles it
needs in its metadata.
.Be mindful of total time to complete compaction of a region
[WARNING]
====
When using the write amplification optimized compaction mode you need to watch for the maximum time
to compact a single region. If it nears an hour you should read through the troubleshooting section
below <<mob.troubleshoot.cleaner.toonew>>. Failure to make the adjustments discussed there could
lead to data loss.
====
[[mob.cache.configure]]
==== Configuring the MOB Cache
Because there can be a large number of MOB files at any time, as compared to the number of HFiles,
MOB files are not always kept open. The MOB file reader cache is a LRU cache which keeps the most
recently used MOB files open. To configure the MOB file reader's cache on each RegionServer, add
the following properties to the RegionServer's `hbase-site.xml`, customize the configuration to
suit your environment, and restart or rolling restart the RegionServer.
.Example MOB Cache Configuration
====
[source,xml]
----
<property>
<name>hbase.mob.file.cache.size</name>
<value>1000</value>
<description>
Number of opened file handlers to cache.
A larger value will benefit reads by providing more file handlers per mob
file cache and would reduce frequent file opening and closing.
      However, if this is set too high, this could lead to a "too many opened file handlers" error.
The default value is 1000.
</description>
</property>
<property>
<name>hbase.mob.cache.evict.period</name>
<value>3600</value>
<description>
The amount of time in seconds after which an unused file is evicted from the
MOB cache. The default value is 3600 seconds.
</description>
</property>
<property>
<name>hbase.mob.cache.evict.remain.ratio</name>
<value>0.5f</value>
<description>
      A multiplier (between 0.0 and 1.0) which determines how many files remain cached
      after an eviction occurs, which is triggered by reaching the `hbase.mob.file.cache.size`
      threshold.
      The default value is 0.5f, which means that half the files (the least-recently-used
      ones) are evicted.
</description>
</property>
----
====
==== Manually Compacting MOB Files
To manually compact MOB files, rather than waiting for the
periodic chore to trigger compaction, use the
`major_compact` HBase shell command. The command
requires the table name as its first argument and optionally takes a column
family as the second argument. If used with a column family that includes MOB data,
these operator requests will result in the MOB data being compacted.
----
hbase> major_compact 't1'
hbase> major_compact 't2', 'c1'
----
This same request can be made via the `Admin.majorCompact` Java API.
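For example, a minimal Java sketch (illustrative class name; an _hbase-site.xml_ is assumed on the
classpath):

[source,java]
----
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class MajorCompactExample {
  public static void main(String[] args) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection();
         Admin admin = connection.getAdmin()) {
      admin.majorCompact(TableName.valueOf("t1"));                      // whole table
      admin.majorCompact(TableName.valueOf("t2"), Bytes.toBytes("c1")); // one column family
    }
  }
}
----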
=== MOB Troubleshooting
[[mob.troubleshoot.cleaner.toonew]]
==== Adjusting the MOB cleaner's tolerance for new hfiles
The MOB cleaner chore ignores all MOB hfiles that were created more recently than an hour prior to
the start of the chore to ensure we don't miss the reference metadata from the corresponding regular
hfile. Without this safety check it would be possible for the cleaner chore to see a MOB hfile for
an in progress flush or compaction and prematurely archive the MOB data. This default buffer should
be sufficient for normal use.
You will need to adjust the tolerance if you use write amplification optimized MOB compaction and
the combination of your underlying filesystem performance and data shape is such that it could take
more than an hour to complete major compaction of a single region. For example, if your MOB data is
distributed such that your largest region adds 80GB of MOB data between compactions that include
rewriting MOB data and your HDFS cluster is only capable of writing 20MB/s for a single file then
when performing the optimized compaction the Region Server will take about a minute to write the
first 1GB MOB hfile and then another hour and seven minutes to write the remaining seventy-nine 1GB
MOB hfiles before finally committing the new reference hfile at the end of the compaction. Given
this example, you would need a larger tolerance window.
You will also need to adjust the tolerance if Region Server flush operations take longer than an
hour for the two HDFS move operations needed to commit both the MOB hfile and the normal hfile that
references it. Such a delay should not happen with a normally configured and healthy HDFS and HBase.
The cleaner's window for "too recent" is controlled by setting `hbase.mob.min.age.archive` to a
positive integer number of milliseconds.
====
[source,xml]
----
<property>
<name>hbase.mob.min.age.archive</name>
<value>86400000</value>
  <description>Example of tuning the cleaner to only archive files older than a day.</description>
</property>
----
====
==== Retrieving MOB metadata through the HBase Shell
While working on troubleshooting failures in the MOB system you can retrieve some of the internal
information through the HBase shell by specifying special attributes on a scan.
----
hbase(main):112:0> scan 'some_table', {STARTROW => '00012-example-row-key', LIMIT => 1,
hbase(main):113:1* CACHE_BLOCKS => false, ATTRIBUTES => { 'hbase.mob.scan.raw' => '1',
hbase(main):114:2* 'hbase.mob.scan.ref.only' => '1' } }
----
The MOB internal information is stored as four bytes for the size of the underlying cell value and
then a UTF8 string with the name of the MOB HFile that contains the underlying cell value. Note that
by default the entirety of this serialized structure will be passed through the HBase shell's binary
string converter. That means the bytes that make up the value size will most likely be written as
escaped non-printable byte values, e.g. '\x03', unless they happen to correspond to ASCII
characters.
Let's look at a specific example:
----
hbase(main):112:0> scan 'some_table', {STARTROW => '00012-example-row-key', LIMIT => 1,
hbase(main):113:1* CACHE_BLOCKS => false, ATTRIBUTES => { 'hbase.mob.scan.raw' => '1',
hbase(main):114:2* 'hbase.mob.scan.ref.only' => '1' } }
ROW COLUMN+CELL
00012-example-row-key column=foo:bar, timestamp=1511179764, value=\x00\x02|\x94d41d8cd98f00b204
e9800998ecf8427e19700118ffd9c244fe69488bbc9f2c77d24a3e6a
1 row(s) in 0.0130 seconds
----
In this case the first four bytes are `\x00\x02|\x94` which corresponds to the bytes
`[0x00, 0x02, 0x7C, 0x94]`. (Note that the third byte was printed as the ASCII character '|'.)
Decoded as an integer this gives us an underlying value size of 162,964 bytes.
The remaining bytes give us an HFile name,
'd41d8cd98f00b204e9800998ecf8427e19700118ffd9c244fe69488bbc9f2c77d24a3e6a'. This HFile will most
likely be stored in the designated MOB storage area for this specific table. However, the file could
also be in the archive area if this table is from a restored snapshot. Furthermore, if the table is
from a cloned snapshot of a different table then the file could be in either the active or archive
area of that source table. As mentioned in the explanation of MOB reference cells above, the Region
Server will use a server side tag to optimize looking at the mob and archive area of the correct
original table when finding the MOB HFile. Since your scan is client side it can't retrieve that tag
and you'll either need to already know the lineage of your table or you'll need to search across all
tables.
Assuming you are authenticated as a user with HBase superuser rights, you can search for it:
----
$> hdfs dfs -find /hbase -name \
d41d8cd98f00b204e9800998ecf8427e19700118ffd9c244fe69488bbc9f2c77d24a3e6a
/hbase/mobdir/data/default/some_table/372c1b27e3dc0b56c3a031926e5efbe9/foo/d41d8cd98f00b204e9800998ecf8427e19700118ffd9c244fe69488bbc9f2c77d24a3e6a
----
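To do this decoding programmatically, the reference value can be unpacked with the client-side
`Bytes` utility. A minimal sketch, assuming the layout described above (a four-byte big-endian size
followed by the UTF-8 hfile name) and that you already hold a reference cell's value:

[source,java]
----
import org.apache.hadoop.hbase.util.Bytes;

public class MobRefDecoder {
  /** Prints the value size and MOB hfile name encoded in a MOB reference cell's value. */
  public static void decode(byte[] refCellValue) {
    // First four bytes: the size of the underlying value, as a big-endian int.
    int valueSize = Bytes.toInt(refCellValue, 0, Bytes.SIZEOF_INT);
    // Remaining bytes: the name of the MOB hfile that holds the value.
    String mobFileName = Bytes.toString(refCellValue, Bytes.SIZEOF_INT,
        refCellValue.length - Bytes.SIZEOF_INT);
    System.out.println("value size=" + valueSize + " bytes, mob hfile=" + mobFileName);
  }
}
----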
==== Moving a column family out of MOB
If you want to disable MOB on a column family you must ensure you instruct HBase to migrate the data
out of the MOB system prior to turning the feature off. If you fail to do this HBase will return the
internal MOB metadata to applications because it will not know that it needs to resolve the actual
values.
The following procedure will safely migrate the underlying data without requiring a cluster outage.
Clients will see a number of retries when configuration settings are applied and regions are
reloaded.
.Procedure: Stop MOB maintenance, change MOB threshold, rewrite data via compaction
. Ensure the MOB compaction chore in the Master is off by setting
`hbase.mob.compaction.chore.period` to `0`. Applying this configuration change will require a
rolling restart of HBase Masters. That will require at least one fail-over of the active master,
which may cause retries for clients doing HBase administrative operations.
. Ensure no MOB compactions are issued for the table via the HBase shell for the duration of this
migration.
. Use the HBase shell to change the MOB size threshold for the column family you are migrating to a
value that is larger than the largest cell present in the column family. E.g. given a table named
'some_table' and a column family named 'foo' we can pick one gigabyte as an arbitrary "bigger than
what we store" value:
+
----
hbase(main):011:0> alter 'some_table', {NAME => 'foo', MOB_THRESHOLD => '1000000000'}
Updating all regions with the new schema...
9/25 regions updated.
25/25 regions updated.
Done.
0 row(s) in 3.4940 seconds
----
+
Note that if you are still ingesting data you must ensure this threshold is larger than any cell
value you might write; MAX_INT would be a safe choice.
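+
The same schema change can be scripted against the Java `Admin` API; a minimal sketch that copies
the column family's existing settings and only raises the threshold (an open `Admin` named `admin`
is assumed):
+
[source,java]
----
TableName tn = TableName.valueOf("some_table");
ColumnFamilyDescriptor existing = admin.getDescriptor(tn).getColumnFamily(Bytes.toBytes("foo"));
ColumnFamilyDescriptor updated = ColumnFamilyDescriptorBuilder.newBuilder(existing)
    .setMobThreshold(1000000000L) // bigger than anything we store
    .build();
admin.modifyColumnFamily(tn, updated); // applies the rolling schema update shown above
----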
. Perform a major compaction on the table. Specifically you are performing a "normal" compaction and
not a MOB compaction.
+
----
hbase(main):012:0> major_compact 'some_table'
0 row(s) in 0.2600 seconds
----
. Monitor for the end of the major compaction. Since compaction is handled asynchronously you'll
need to use the shell to first see the compaction start and then see it end.
+
HBase should first say that a "MAJOR" compaction is happening.
+
----
hbase(main):015:0> @hbase.admin(@formatter).instance_eval do
hbase(main):016:1* p @admin.get_compaction_state('some_table').to_string
hbase(main):017:2* end
"MAJOR"
----
+
When the compaction has finished the result should print out "NONE".
+
----
hbase(main):015:0> @hbase.admin(@formatter).instance_eval do
hbase(main):016:1* p @admin.get_compaction_state('some_table').to_string
hbase(main):017:2* end
"NONE"
----
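+
The same check can be scripted by polling `Admin.getCompactionState`; a minimal sketch (an open
`Admin` named `admin` is assumed):
+
[source,java]
----
TableName tn = TableName.valueOf("some_table");
// Poll until the requested major compaction has finished.
while (admin.getCompactionState(tn) != CompactionState.NONE) {
  Thread.sleep(10000L); // throws InterruptedException; handle as appropriate
}
----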
. Run the _mobrefs_ utility to ensure there are no MOB cells. Specifically, the tool will launch a
Hadoop MapReduce job that will show a job counter of 0 input records when we've successfully
rewritten all of the data.
+
----
$> HADOOP_CLASSPATH=/etc/hbase/conf:$(hbase mapredcp) yarn jar \
/some/path/to/hbase-shaded-mapreduce.jar mobrefs mobrefs-report-output some_table foo
...
19/12/10 11:38:47 INFO impl.YarnClientImpl: Submitted application application_1575695902338_0004
19/12/10 11:38:47 INFO mapreduce.Job: The url to track the job: https://rm-2.example.com:8090/proxy/application_1575695902338_0004/
19/12/10 11:38:47 INFO mapreduce.Job: Running job: job_1575695902338_0004
19/12/10 11:38:57 INFO mapreduce.Job: Job job_1575695902338_0004 running in uber mode : false
19/12/10 11:38:57 INFO mapreduce.Job: map 0% reduce 0%
19/12/10 11:39:07 INFO mapreduce.Job: map 7% reduce 0%
19/12/10 11:39:17 INFO mapreduce.Job: map 13% reduce 0%
19/12/10 11:39:19 INFO mapreduce.Job: map 33% reduce 0%
19/12/10 11:39:21 INFO mapreduce.Job: map 40% reduce 0%
19/12/10 11:39:22 INFO mapreduce.Job: map 47% reduce 0%
19/12/10 11:39:23 INFO mapreduce.Job: map 60% reduce 0%
19/12/10 11:39:24 INFO mapreduce.Job: map 73% reduce 0%
19/12/10 11:39:27 INFO mapreduce.Job: map 100% reduce 0%
19/12/10 11:39:35 INFO mapreduce.Job: map 100% reduce 100%
19/12/10 11:39:35 INFO mapreduce.Job: Job job_1575695902338_0004 completed successfully
19/12/10 11:39:35 INFO mapreduce.Job: Counters: 54
...
Map-Reduce Framework
Map input records=0
...
19/12/09 22:41:28 INFO mapreduce.MobRefReporter: Finished creating report for 'some_table', family='foo'
----
+
If the data has not successfully been migrated out, this report will show both a non-zero number
of input records and a count of mob cells.
+
----
$> HADOOP_CLASSPATH=/etc/hbase/conf:$(hbase mapredcp) yarn jar \
/some/path/to/hbase-shaded-mapreduce.jar mobrefs mobrefs-report-output some_table foo
...
19/12/10 11:44:18 INFO impl.YarnClientImpl: Submitted application application_1575695902338_0005
19/12/10 11:44:18 INFO mapreduce.Job: The url to track the job: https://busbey-2.gce.cloudera.com:8090/proxy/application_1575695902338_0005/
19/12/10 11:44:18 INFO mapreduce.Job: Running job: job_1575695902338_0005
19/12/10 11:44:26 INFO mapreduce.Job: Job job_1575695902338_0005 running in uber mode : false
19/12/10 11:44:26 INFO mapreduce.Job: map 0% reduce 0%
19/12/10 11:44:36 INFO mapreduce.Job: map 7% reduce 0%
19/12/10 11:44:45 INFO mapreduce.Job: map 13% reduce 0%
19/12/10 11:44:47 INFO mapreduce.Job: map 27% reduce 0%
19/12/10 11:44:48 INFO mapreduce.Job: map 33% reduce 0%
19/12/10 11:44:50 INFO mapreduce.Job: map 40% reduce 0%
19/12/10 11:44:51 INFO mapreduce.Job: map 53% reduce 0%
19/12/10 11:44:52 INFO mapreduce.Job: map 73% reduce 0%
19/12/10 11:44:54 INFO mapreduce.Job: map 100% reduce 0%
19/12/10 11:44:59 INFO mapreduce.Job: map 100% reduce 100%
19/12/10 11:45:00 INFO mapreduce.Job: Job job_1575695902338_0005 completed successfully
19/12/10 11:45:00 INFO mapreduce.Job: Counters: 54
...
Map-Reduce Framework
Map input records=1
...
MOB
NUM_CELLS=1
...
19/12/10 11:45:00 INFO mapreduce.MobRefReporter: Finished creating report for 'some_table', family='foo'
----
+
If this happens you should verify that MOB compactions are disabled, verify that you have picked
a sufficiently large MOB threshold, and redo the major compaction step.
. When the _mobrefs_ report shows that no more data is stored in the MOB system then you can safely
alter the column family configuration so that the MOB feature is disabled.
+
----
hbase(main):017:0> alter 'some_table', {NAME => 'foo', IS_MOB => 'false'}
Updating all regions with the new schema...
8/25 regions updated.
25/25 regions updated.
Done.
0 row(s) in 2.9370 seconds
----
. After the column family no longer shows the MOB feature enabled, it is safe to start MOB
maintenance chores again. You can allow the default to be used for
`hbase.mob.compaction.chore.period` by removing it from your configuration files or restore
it to whatever custom value you had prior to starting this process.
. Once the MOB feature is disabled for the column family there will be no internal HBase process
looking for data in the MOB storage area specific to this column family. There will still be data
present there from prior to the compaction process that rewrote the values into HBase's data area.
You can check for this residual data directly in HDFS as an HBase superuser.
+
----
$ hdfs dfs -count /hbase/mobdir/data/default/some_table
4 54 9063269081 /hbase/mobdir/data/default/some_table
----
+
This data is spurious and may be reclaimed. You should sideline it, verify your application's view
of the table, and then delete it.
=== MOB Upgrade Considerations
Generally, data stored using the MOB feature should transparently continue to work correctly across
HBase upgrades.
==== Upgrading to a version with the "distributed MOB compaction" feature
Prior to the work in HBASE-22749, "Distributed MOB compactions", HBase had the Master coordinate all
compaction maintenance of the MOB hfiles. Centralizing management of the MOB data allowed for space
optimizations but safely coordinating that management with Region Servers resulted in edge cases that
caused data loss (ref link:https://issues.apache.org/jira/browse/HBASE-22075[HBASE-22075]).
Users of the MOB feature upgrading to a version of HBase that includes HBASE-22749 should be aware
of the following changes:
* The MOB system no longer allows setting "MOB Compaction Policies"
* The MOB system no longer attempts to group MOB values by the date of the original cell's timestamp
according to said compaction policies, daily or otherwise
* The MOB system no longer needs to track individual cell deletes through the use of special
files in the MOB storage area with the suffix `_del`. After upgrading you should sideline these
files.
* Under default configuration the MOB system should take much less time to perform a compaction of
MOB stored values. This is a direct consequence of the fact that HBase will place a much larger
load on the underlying filesystem when doing compactions of MOB stored values; the additional load
should be roughly proportional to the number of region servers. I.e. for a cluster
with three region servers and two masters the default configuration should have HBase put three
times the load on HDFS during major compactions that rewrite MOB data when compared to Master
handled MOB compaction; it should also be approximately three times as fast.
* When the MOB system detects that a table has hfiles with references to MOB data but the reference
hfiles do not yet have the needed file level metadata (i.e. from use of the MOB feature prior to
HBASE-22749) then it will refuse to archive _any_ MOB hfiles from that table. The normal course of
periodic compactions done by Region Servers will update existing hfiles with MOB references, but
until a given table has been through the needed compactions operators should expect to see an
increased amount of storage used by the MOB feature.
* Performing a compaction with type "MOB" no longer has special handling to compact specifically the
MOB hfiles. Instead it will issue a warning and do a compaction of the table. For example using
the HBase shell as follows will result in a warning in the Master logs followed by a major
compaction of the 'example' table in its entirety or of its 'big' column family, respectively.
+
----
hbase> major_compact 'example', nil, 'MOB'
hbase> major_compact 'example', 'big', 'MOB'
----
+
The same is true for directly using the Java API for
`admin.majorCompact(TableName.valueOf("example"), CompactType.MOB)`.
* Similarly, manually performing a major compaction on a table or region will also handle compacting
the MOB stored values for that table or region respectively.
The following configuration setting has been deprecated and replaced:
* `hbase.master.mob.ttl.cleaner.period` has been replaced with `hbase.master.mob.cleaner.period`
The following configuration settings are no longer used:
* `hbase.mob.compaction.mergeable.threshold`
* `hbase.mob.delfile.max.count`
* `hbase.mob.compaction.batch.size`
* `hbase.mob.compactor.class`
* `hbase.mob.compaction.threads.max`


@ -1,212 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[appendix]
[[hbck.in.depth]]
== hbck In Depth
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
HBaseFsck (hbck) is a tool for checking for region consistency and table integrity problems and repairing a corrupted HBase.
It works in two basic modes -- a read-only inconsistency identifying mode and a multi-phase read-write repair mode.
=== Running hbck to identify inconsistencies
To check to see if your HBase cluster has corruptions, run hbck against your HBase cluster:
[source,bourne]
----
$ ./bin/hbase hbck
----
At the end of the command's output it prints OK or tells you the number of INCONSISTENCIES present.
You may also want to run hbck a few times because some inconsistencies can be transient (e.g.
cluster is starting up or a region is splitting). Operationally you may want to run hbck regularly and set up alerts (e.g.
via Nagios) if it repeatedly reports inconsistencies. A run of hbck will report a list of inconsistencies along with a brief description of the regions and tables affected.
Using the `-details` option will report more details, including a representative listing of all the splits present in all the tables.
[source,bourne]
----
$ ./bin/hbase hbck -details
----
If you just want to know if some tables are corrupted, you can limit hbck to identify inconsistencies in only specific tables.
For example, the following command would only attempt to check tables TableFoo and TableBar.
The benefit is that hbck will run in less time.
[source,bourne]
----
$ ./bin/hbase hbck TableFoo TableBar
----
=== Inconsistencies
If after several runs, inconsistencies continue to be reported, you may have encountered a corruption.
These should be rare, but in the event they occur newer versions of HBase include the hbck tool enabled with automatic repair options.
There are two invariants that when violated create inconsistencies in HBase:
* HBase's region consistency invariant is satisfied if every region is assigned and deployed on exactly one region server, and all places where this state is kept are in accordance.
* HBase's table integrity invariant is satisfied if for each table, every possible row key resolves to exactly one region.
Repairs generally work in three phases -- a read-only information gathering phase that identifies inconsistencies, a table integrity repair phase that restores the table integrity invariant, and then finally a region consistency repair phase that restores the region consistency invariant.
Starting from version 0.90.0, hbck could detect region consistency problems and report on a subset of possible table integrity problems.
It also included the ability to automatically fix the most common inconsistency, region assignment and deployment consistency problems.
This repair could be done by using the `-fix` command line option.
These fixes close regions if they are open on the wrong server or on multiple region servers, and also assign regions to region servers if they are not open.
Starting from HBase versions 0.90.7, 0.92.2 and 0.94.0, several new command line options are introduced to aid in repairing a corrupted HBase.
This hbck sometimes goes by the nickname ``uberhbck''. Each particular version of uberhbck is compatible with HBase clusters of the same major version (the 0.90.7 uberhbck can repair a 0.90.4 cluster). However, versions <=0.90.6 and versions <=0.92.1 may require restarting the master or failing over to a backup master.
=== Localized repairs
When repairing a corrupted HBase, it is best to repair the lowest risk inconsistencies first.
These are generally region consistency repairs -- localized single region repairs, that only modify in-memory data, ephemeral zookeeper data, or patch holes in the META table.
Region consistency requires that the state of the region's data in HDFS (.regioninfo files), the region's row in the hbase:meta table, and the region's deployment/assignment on region servers and the master all be in accordance.
Options for repairing region consistency include:
* `-fixAssignments` (equivalent to the 0.90 `-fix` option) repairs unassigned, incorrectly assigned or multiply assigned regions.
* `-fixMeta` which removes meta rows when corresponding regions are not present in HDFS and adds new meta rows if the regions are present in HDFS but not in META.

To fix deployment and assignment problems you can run this command:
[source,bourne]
----
$ ./bin/hbase hbck -fixAssignments
----
To fix deployment and assignment problems as well as repairing incorrect meta rows you can run this command:
[source,bourne]
----
$ ./bin/hbase hbck -fixAssignments -fixMeta
----
There are a few classes of table integrity problems that are low risk repairs.
The first two are degenerate (startkey == endkey) regions and backwards regions (startkey > endkey). These are automatically handled by sidelining the data to a temporary directory (/hbck/xxxx). The third low-risk class is HDFS region holes.
These can be repaired using the:
* `-fixHdfsHoles` option for fabricating new empty regions on the file system.
If holes are detected you can use `-fixHdfsHoles`, and should include `-fixMeta` and `-fixAssignments` to make the new region consistent.
[source,bourne]
----
$ ./bin/hbase hbck -fixAssignments -fixMeta -fixHdfsHoles
----
Since this is a common operation, we've added the `-repairHoles` flag, which is equivalent to the previous command:
[source,bourne]
----
$ ./bin/hbase hbck -repairHoles
----
If inconsistencies still remain after these steps, you most likely have table integrity problems related to orphaned or overlapping regions.
=== Region Overlap Repairs
Table integrity problems can require repairs that deal with overlaps.
This is a riskier operation because it requires modifications to the file system, requires some decision making, and may require some manual steps.
For these repairs it is best to analyze the output of a `hbck -details` run so that you limit repair attempts to only the problems the checks identify.
Because this is riskier, there are safeguards that should be used to limit the scope of the repairs.
WARNING: These repairs are relatively new and have only been tested on online but idle HBase instances (no reads/writes). Use at your own risk in an active production environment! The options for repairing table integrity violations include:
* `-fixHdfsOrphans` option for ``adopting'' a region directory that is missing a region metadata file (the .regioninfo file).
* `-fixHdfsOverlaps` ability for fixing overlapping regions
When repairing overlapping regions, a region's data can be modified on the file system in two ways: 1) by merging regions into a larger region or 2) by sidelining regions by moving data to a ``sideline'' directory where data could be restored later.
Merging a large number of regions is technically correct but could result in an extremely large region that requires a series of costly compactions and splitting operations.
In these cases, it is probably better to sideline the regions that overlap with the most other regions (likely the largest ranges) so that merges can happen on a more reasonable scale.
Since these sidelined regions are already laid out in HBase's native directory and HFile format, they can be restored by using HBase's bulk load mechanism.
The default safeguard thresholds are conservative.
The following options let you override the default thresholds and enable the large-region sidelining feature; a combined example follows the list.
* `-maxMerge <n>` maximum number of overlapping regions to merge
* `-sidelineBigOverlaps` if more than maxMerge regions are overlapping, attempt to sideline the regions overlapping with the most other regions.
* `-maxOverlapsToSideline <n>` if sidelining large overlapping regions, sideline at most n regions.
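For example, a hedged invocation combining these safeguards (the table name is a placeholder):

[source,bourne]
----
$ ./bin/hbase hbck -fixHdfsOverlaps -maxMerge 5 -sidelineBigOverlaps -maxOverlapsToSideline 2 TableFoo
----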
Since often you just want to get the tables repaired, you can use this option to turn on all repair options:
* `-repair` includes all the region consistency options and only the hole repairing table integrity options.
Finally, there are safeguards to limit repairs to only specific tables.
For example the following command would only attempt to check and repair table TableFoo and TableBar.
----
$ ./bin/hbase hbck -repair TableFoo TableBar
----
==== Special cases: Meta is not properly assigned
There are a few special cases that hbck can handle as well.
Sometimes the meta table's only region is inconsistently assigned or deployed.
In this case there is a special `-fixMetaOnly` option that can try to fix meta assignments.
----
$ ./bin/hbase hbck -fixMetaOnly -fixAssignments
----
==== Special cases: HBase version file is missing
HBase's data on the file system requires a version file in order to start.
If this file is missing, you can use the `-fixVersionFile` option to fabricate a new HBase version file.
This assumes that the version of hbck you are running is the appropriate version for the HBase cluster.
==== Special case: Root and META are corrupt.
The most drastic corruption scenario is the case where the ROOT or META is corrupted and HBase will not start.
In this case you can use the OfflineMetaRepair tool to create new ROOT and META regions and tables.
This tool assumes that HBase is offline.
It then marches through the existing HBase home directory and loads as much information as possible from the region metadata files (.regioninfo files) on the file system.
If the region metadata has proper table integrity, it sidelines the original root and meta table directories, and builds new ones with pointers to the region directories and their data.
----
$ ./bin/hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair
----
NOTE: This tool is not as clever as uberhbck but can be used to bootstrap repairs that uberhbck can complete.
If the tool succeeds you should be able to start hbase and run online repairs if necessary.
==== Special cases: Offline split parent
Once a region is split, the offline parent will be cleaned up automatically.
Sometimes, daughter regions are split again before their parents are cleaned up.
HBase can clean up parents in the right order.
However, there could be some lingering offline split parents sometimes.
They are in META, in HDFS, and not deployed.
But HBase can't clean them up.
In this case, you can use the `-fixSplitParents` option to reset them in META to be online and not split.
Therefore, hbck can merge them with other regions if fixing overlapping regions option is used.
This option should not normally be used, and it is not in `-fixAll`.
:numbered:
@ -1,269 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[hbtop]]
= hbtop
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
== Overview
`hbtop` is a real-time monitoring tool for HBase like Unix's top command.
It can display summary information as well as metrics per Region/Namespace/Table/RegionServer.
In this tool, you can see the metrics sorted by a selected field and filter the metrics to see only metrics you really want to see.
Also, with the drill-down feature, you can find hot regions easily in a top-down manner.
== Usage
You can run hbtop with the following command:
----
$ hbase hbtop
----
In this case, the values of `hbase.client.zookeeper.quorum` and `zookeeper.znode.parent` in `hbase-site.xml` on the classpath, or their default values, are used to connect.
Or, you can specify your own zookeeper quorum and znode parent as follows:
----
$ hbase hbtop -Dhbase.client.zookeeper.quorum=<zookeeper quorum> -Dzookeeper.znode.parent=<znode parent>
----
image::https://hbase.apache.org/hbtop-images/top_screen.gif[Top screen]
The top screen consists of a summary part and a metrics part.
In the summary part, you can see `HBase Version`, `Cluster ID`, `The number of region servers`, `Region count`, `Average Cluster Load` and `Aggregated Request/s`.
In the metrics part, you can see metrics per Region/Namespace/Table/RegionServer depending on the selected mode.
The top screen is refreshed at a set interval, 3 seconds by default.
=== Scrolling metric records
You can scroll the metric records in the metrics part.
image::https://hbase.apache.org/hbtop-images/scrolling_metric_records.gif[Scrolling metric records]
=== Command line arguments
[options="header"]
|=================================
| Argument | Description
| -d,--delay &lt;arg&gt; | The refresh delay (in seconds); default is 3 seconds
| -h,--help | Print usage; for help while the tool is running press the `h` key
| -m,--mode &lt;arg&gt; | The mode; `n` (Namespace)&#124;`t` (Table)&#124;`r` (Region)&#124;`s` (RegionServer), default is `r` (Region)
|=================================
=== Modes
hbtop has the following 4 modes:
[options="header"]
|=================================
| Mode | Description
| Region | Showing metric records per region
| Namespace | Showing metric records per namespace
| Table | Showing metric records per table
| RegionServer | Showing metric records per region server
|=================================
==== Region mode
In Region mode, the default sort field is `#REQ/S`.
The fields in this mode are as follows:
[options="header"]
|=================================
| Field | Description | Displayed by default
| RNAME | Region Name | false
| NAMESPACE | Namespace Name | true
| TABLE | Table Name | true
| SCODE | Start Code | false
| REPID | Replica ID | false
| REGION | Encoded Region Name | true
| RS | Short Region Server Name | true
| LRS | Long Region Server Name | false
| #REQ/S | Request Count per second | true
| #READ/S | Read Request Count per second | true
| #FREAD/S | Filtered Read Request Count per second | true
| #WRITE/S | Write Request Count per second | true
| SF | StoreFile Size | true
| USF | Uncompressed StoreFile Size | false
| #SF | Number of StoreFiles | true
| MEMSTORE | MemStore Size | true
| LOCALITY | Block Locality | true
| SKEY | Start Key | false
| #COMPingCELL | Compacting Cell Count | false
| #COMPedCELL | Compacted Cell Count | false
| %COMP | Compaction Progress | false
| LASTMCOMP | Last Major Compaction Time | false
|=================================
==== Namespace mode
In Namespace mode, the default sort field is `#REQ/S`.
The fields in this mode are as follows:
[options="header"]
|=================================
| Field | Description | Displayed by default
| NAMESPACE | Namespace Name | true
| #REGION | Region Count | true
| #REQ/S | Request Count per second | true
| #READ/S | Read Request Count per second | true
| #FREAD/S | Filtered Read Request Count per second | true
| #WRITE/S | Write Request Count per second | true
| SF | StoreFile Size | true
| USF | Uncompressed StoreFile Size | false
| #SF | Number of StoreFiles | true
| MEMSTORE | MemStore Size | true
|=================================
==== Table mode
In Table mode, the default sort field is `#REQ/S`.
The fields in this mode are as follows:
[options="header"]
|=================================
| Field | Description | Displayed by default
| NAMESPACE | Namespace Name | true
| TABLE | Table Name | true
| #REGION | Region Count | true
| #REQ/S | Request Count per second | true
| #READ/S | Read Request Count per second | true
| #FREAD/S | Filtered Read Request Count per second | true
| #WRITE/S | Write Request Count per second | true
| SF | StoreFile Size | true
| USF | Uncompressed StoreFile Size | false
| #SF | Number of StoreFiles | true
| MEMSTORE | MemStore Size | true
|=================================
==== RegionServer mode
In RegionServer mode, the default sort field is `#REQ/S`.
The fields in this mode are as follows:
[options="header"]
|=================================
| Field | Description | Displayed by default
| RS | Short Region Server Name | true
| LRS | Long Region Server Name | false
| #REGION | Region Count | true
| #REQ/S | Request Count per second | true
| #READ/S | Read Request Count per second | true
| #FREAD/S | Filtered Read Request Count per second | true
| #WRITE/S | Write Request Count per second | true
| SF | StoreFile Size | true
| USF | Uncompressed StoreFile Size | false
| #SF | Number of StoreFiles | true
| MEMSTORE | MemStore Size | true
| UHEAP | Used Heap Size | true
| MHEAP | Max Heap Size | true
|=================================
=== Changing mode
You can change the mode by pressing the `m` key in the top screen.
image::https://hbase.apache.org/hbtop-images/changing_mode.gif[Changing mode]
=== Changing the refresh delay
You can change the refresh delay by pressing the `d` key in the top screen.
image::https://hbase.apache.org/hbtop-images/changing_refresh_delay.gif[Changing the refresh delay]
=== Changing the displayed fields
You can move to the fields screen by pressing the `f` key in the top screen. In the fields screen, you can change the displayed fields by choosing a field and pressing the `d` key or the `space` key.
image::https://hbase.apache.org/hbtop-images/changing_displayed_fields.gif[Changing the displayed fields]
=== Changing the sort field
You can move to the fields screen by pressing the `f` key in the top screen. In the fields screen, you can change the sort field by choosing a field and pressing the `s` key. Also, you can change the sort order (ascending or descending) by pressing the `R` key.
image::https://hbase.apache.org/hbtop-images/changing_sort_field.gif[Changing the sort field]
=== Changing the order of the fields
You can move to the fields screen by pressing the `f` key in the top screen. In the fields screen, you can change the order of the fields.
image::https://hbase.apache.org/hbtop-images/changing_order_of_fields.gif[Changing the order of the fields]
=== Filters
You can filter the metric records with the filter feature. You can add filters by pressing the `o` key for case-insensitive matching or the `O` key for case-sensitive matching.
image::https://hbase.apache.org/hbtop-images/adding_filters.gif[Adding filters]
The syntax is as follows:
----
<Field><Operator><Value>
----
For example, you can add filters like the following:
----
NAMESPACE==default
REQ/S>1000
----
The operators you can specify are as follows:
[options="header"]
|=================================
| Operator | Description
| = | Partial match
| == | Exact match
| > | Greater than
| >= | Greater than or equal to
| < | Less than
| <= | Less than or equal to
|=================================
You can see the current filters by pressing the `^o` key and clear them by pressing the `=` key.
image::https://hbase.apache.org/hbtop-images/showing_and_clearing_filters.gif[Showing and clearing filters]
=== Drilling down
You can drill down into a metric record by choosing the record you want to examine and pressing the `i` key in the top screen. With this feature, you can find hot regions easily in a top-down manner.
image::https://hbase.apache.org/hbtop-images/driling_down.gif[Drilling down]
=== Help screen
You can see the help screen by pressing the `h` key in the top screen.
image::https://hbase.apache.org/hbtop-images/help_screen.gif[Help screen]
== Others
=== How hbtop gets the metrics data
hbtop gets the metrics from ClusterMetrics, which is returned as the result of a call to Admin#getClusterMetrics() on the current HMaster. To add metrics to hbtop, they will need to be exposed via ClusterMetrics.
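For orientation, a minimal hedged sketch of fetching the same ClusterMetrics through the Java client (illustrative only, not hbtop's actual code; configuration comes from the hbase-site.xml on the classpath):

[source,java]
----
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Fetch the cluster-wide metrics that hbtop renders and print a summary line.
try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
     Admin admin = conn.getAdmin()) {
  ClusterMetrics metrics = admin.getClusterMetrics();
  System.out.println("Servers: " + metrics.getLiveServerMetrics().size()
      + ", regions: " + metrics.getRegionCount()
      + ", avg load: " + metrics.getAverageLoad());
}
----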
@ -1 +0,0 @@
../../../site/resources/images/
@ -1,109 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[inmemory_compaction]]
= In-memory Compaction
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
[[imc.overview]]
== Overview
In-memory Compaction (a.k.a. Accordion) is a new feature in hbase-2.0.0.
It was first introduced on the Apache HBase Blog at
link:https://blogs.apache.org/hbase/entry/accordion-hbase-breathes-with-in[Accordion: HBase Breathes with In-Memory Compaction].
Quoting the blog:
____
Accordion reapplies the LSM principal [_Log-Structured-Merge Tree_, the design pattern upon which HBase is based] to MemStore, in order to eliminate redundancies and other overhead while the data is still in RAM. Doing so decreases the frequency of flushes to HDFS, thereby reducing the write amplification and the overall disk footprint. With less flushes, the write operations are stalled less frequently as the MemStore overflows, therefore the write performance is improved. Less data on disk also implies less pressure on the block cache, higher hit rates, and eventually better read response times. Finally, having less disk writes also means having less compaction happening in the background, i.e., less cycles are stolen from productive (read and write) work. All in all, the effect of in-memory compaction can be envisioned as a catalyst that enables the system move faster as a whole.
____
A developer view is available at
link:https://blogs.apache.org/hbase/entry/accordion-developer-view-of-in[Accordion: Developer View of In-Memory Compaction].
In-memory compaction works best under high data churn; overwrites or over-versions
can be eliminated while the data is still in memory. If the writes are all unique,
it may drag write throughput (in-memory compaction costs CPU). We suggest you test
and compare before deploying to production.
In this section we describe how to enable Accordion and the available configurations.
== Enabling
To enable in-memory compactions, set the _IN_MEMORY_COMPACTION_ attribute
on each column family where you want the behavior. The _IN_MEMORY_COMPACTION_
attribute can have one of four values.
* _NONE_: No in-memory compaction.
* _BASIC_: Basic policy enables flushing and keeps a pipeline of flushes until we trip the pipeline maximum threshold and then we flush to disk. No in-memory compaction but can help throughput as data is moved from the profligate, native ConcurrentSkipListMap data-type to more compact (and efficient) data types.
* _EAGER_: This is _BASIC_ policy plus in-memory compaction of flushes (much like the on-disk compactions done to hfiles); on compaction we apply on-disk rules eliminating versions, duplicates, ttl'd cells, etc.
* _ADAPTIVE_: Adaptive compaction adapts to the workload. It applies either index compaction or data compaction based on the ratio of duplicate cells in the data. Experimental.
To enable _BASIC_ on the _info_ column family in the table _radish_, add the attribute to the _info_ column family:
[source,ruby]
----
hbase(main):003:0> alter 'radish', {NAME => 'info', IN_MEMORY_COMPACTION => 'BASIC'}
Updating all regions with the new schema...
All regions updated.
Done.
Took 1.2413 seconds
hbase(main):004:0> describe 'radish'
Table radish is DISABLED
radish
COLUMN FAMILIES DESCRIPTION
{NAME => 'info', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536', METADATA => {
'IN_MEMORY_COMPACTION' => 'BASIC'}}
1 row(s)
Took 0.0239 seconds
----
Note how the IN_MEMORY_COMPACTION attribute shows as part of the _METADATA_ map.
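The same attribute can also be set programmatically. A minimal hedged sketch using the 2.x Admin API (it assumes an open `Connection` named `conn`; the table and family names echo the shell example above):

[source,java]
----
import org.apache.hadoop.hbase.MemoryCompactionPolicy;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

// Rebuild the 'info' family descriptor with BASIC in-memory compaction
// and apply it to the 'radish' table.
ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
    .newBuilder(Bytes.toBytes("info"))
    .setInMemoryCompaction(MemoryCompactionPolicy.BASIC)
    .build();
try (Admin admin = conn.getAdmin()) {
  admin.modifyColumnFamily(TableName.valueOf("radish"), cf);
}
----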
There is also a global configuration, _hbase.hregion.compacting.memstore.type_, which you can set in your _hbase-site.xml_ file. Use it to set the
default on creation of a new table (on creation of a column family Store, we first check the column family configuration for the
_IN_MEMORY_COMPACTION_ setting; if none is present, we consult the _hbase.hregion.compacting.memstore.type_ value; the default is
_BASIC_).
By default, new hbase system tables will have _BASIC_ in-memory compaction set. To specify otherwise,
on new table-creation, set _hbase.hregion.compacting.memstore.type_ to _NONE_ (Note, setting this value
post-creation of system tables will not have a retroactive effect; you will have to alter your tables
to set the in-memory attribute to _NONE_).
The point at which an in-memory flush happens is calculated by dividing the configured region flush size (set in the table descriptor
or read from _hbase.hregion.memstore.flush.size_) by the number of column families and then multiplying by
_hbase.memstore.inmemoryflush.threshold.factor_. Default is 0.014.
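For example, with the default 128 MB region flush size and two column families, an in-memory flush would trigger at roughly 128 MB / 2 × 0.014 ≈ 0.9 MB of segment data (an illustrative figure assuming the default flush size).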
The number of flushes carried by the pipeline is monitored so as to fit within the bounds of memstore sizing
but you can also set a maximum on the number of flushes total by setting
_hbase.hregion.compacting.pipeline.segments.limit_. Default is 2.
When a column family Store is created, a log line says what memstore type is in effect. As of this writing,
there is the old-school _DefaultMemStore_, which fills a _ConcurrentSkipListMap_ and then flushes
to disk or the new _CompactingMemStore_ that is the implementation that provides this new
in-memory compactions facility. Here is a log-line from a RegionServer that shows a column
family Store named _family_ configured to use a _CompactingMemStore_:
----
2018-03-30 11:02:24,466 INFO [Time-limited test] regionserver.HStore(325): Store=family, memstore type=CompactingMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=10
----
Enable TRACE-level logging on the CompactingMemStore class (_org.apache.hadoop.hbase.regionserver.CompactingMemStore_) to see detail on its operation.
@ -1,689 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[mapreduce]]
= HBase and MapReduce
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
Apache MapReduce is a software framework used to analyze large amounts of data. It is provided by link:https://hadoop.apache.org/[Apache Hadoop].
MapReduce itself is out of the scope of this document.
A good place to get started with MapReduce is https://hadoop.apache.org/docs/r2.6.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html.
MapReduce version 2 (MR2) is now part of link:https://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/[YARN].
This chapter discusses specific configuration steps you need to take to use MapReduce on data within HBase.
In addition, it discusses other interactions and issues between HBase and MapReduce
jobs. Finally, it discusses <<cascading,Cascading>>, an
link:http://www.cascading.org/[alternative API] for MapReduce.
.`mapred` and `mapreduce`
[NOTE]
====
There are two mapreduce packages in HBase as in MapReduce itself: _org.apache.hadoop.hbase.mapred_ and _org.apache.hadoop.hbase.mapreduce_.
The former uses the old-style API and the latter the new mode.
The latter has more facilities, though you can usually find an equivalent in the older package.
Pick the package that goes with your MapReduce deploy.
When in doubt or starting over, pick _org.apache.hadoop.hbase.mapreduce_.
In the notes below, we refer to _o.a.h.h.mapreduce_ but replace with
_o.a.h.h.mapred_ if that is what you are using.
====
[[hbase.mapreduce.classpath]]
== HBase, MapReduce, and the CLASSPATH
By default, MapReduce jobs deployed to a MapReduce cluster do not have access to
either the HBase configuration under `$HBASE_CONF_DIR` or the HBase classes.
To give the MapReduce jobs the access they need, you could add _hbase-site.xml_ to _$HADOOP_HOME/conf_ and add HBase jars to the _$HADOOP_HOME/lib_ directory.
You would then need to copy these changes across your cluster. Or you could edit _$HADOOP_HOME/conf/hadoop-env.sh_ and add HBase dependencies to the `HADOOP_CLASSPATH` variable.
Neither of these approaches is recommended because it will pollute your Hadoop install with HBase references.
It also requires you to restart the Hadoop cluster before Hadoop can use the HBase data.
The recommended approach is to let HBase add its dependency jars and use `HADOOP_CLASSPATH` or `-libjars`.
Since HBase `0.90.x`, HBase adds its dependency JARs to the job configuration itself.
The dependencies only need to be available on the local `CLASSPATH` and from here they'll be picked
up and bundled into the fat job jar deployed to the MapReduce cluster. A basic trick just passes
the full HBase classpath -- all HBase and dependent jars as well as configurations -- to the mapreduce
job runner, letting the HBase utility pick out what it needs from the full-on classpath and add it to the
MapReduce job configuration (see the source at `TableMapReduceUtil#addDependencyJars(org.apache.hadoop.mapreduce.Job)` for how this is done).
The following example runs the bundled HBase link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter] MapReduce job against a table named `usertable`.
It sets into `HADOOP_CLASSPATH` the jars HBase needs to run in a MapReduce context (including configuration files such as hbase-site.xml).
Be sure to use the correct version of the HBase JAR for your system; replace the VERSION string in the below command line with the version of
your local HBase install. The backticks (``` symbols) cause the shell to execute the sub-commands, setting the output of `hbase classpath` into `HADOOP_CLASSPATH`.
This example assumes you use a BASH-compatible shell.
[source,bash]
----
$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` \
${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/lib/hbase-mapreduce-VERSION.jar \
org.apache.hadoop.hbase.mapreduce.RowCounter usertable
----
The above command will launch a row counting mapreduce job against the HBase cluster pointed to by your local configuration, on the MapReduce cluster your Hadoop configs point to.
The main for the `hbase-mapreduce.jar` is a Driver that lists a few basic mapreduce tasks that ship with hbase.
For example, presuming your install is hbase `2.0.0-SNAPSHOT`:
[source,bash]
----
$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` \
${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/lib/hbase-mapreduce-2.0.0-SNAPSHOT.jar
An example program must be given as the first argument.
Valid program names are:
CellCounter: Count cells in HBase table.
WALPlayer: Replay WAL files.
completebulkload: Complete a bulk data load.
copytable: Export a table from local cluster to peer cluster.
export: Write table data to HDFS.
exportsnapshot: Export the specific snapshot to a given FileSystem.
import: Import data written by Export.
importtsv: Import data in TSV format.
rowcounter: Count rows in HBase table.
verifyrep: Compare the data from tables in two different clusters. WARNING: It doesn't work for incrementColumnValues'd cells since the timestamp is changed after being appended to the log.
----
You can use the above listed shortnames for mapreduce jobs as in the below re-run of the row counter job (again, presuming your install is hbase `2.0.0-SNAPSHOT`):
[source,bash]
----
$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` \
${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/lib/hbase-mapreduce-2.0.0-SNAPSHOT.jar \
rowcounter usertable
----
You might find the more selective `hbase mapredcp` tool output of interest; it lists the minimum set of jars needed
to run a basic mapreduce job against an hbase install. It does not include configuration. You'll probably need to add
these if you want your MapReduce job to find the target cluster. You'll probably have to also add pointers to extra jars
once you start to do anything of substance. Just specify the extras by passing the system property `-Dtmpjars` when
you run `hbase mapredcp`.
For jobs that do not package their dependencies or call `TableMapReduceUtil#addDependencyJars`, the following command structure is necessary:
[source,bash]
----
$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase mapredcp`:${HBASE_HOME}/conf hadoop jar MyApp.jar MyJobMainClass -libjars $(${HBASE_HOME}/bin/hbase mapredcp | tr ':' ',') ...
----
[NOTE]
====
The example may not work if you are running HBase from its build directory rather than an installed location.
You may see an error like the following:
----
java.lang.RuntimeException: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterMapper
----
If this occurs, try modifying the command as follows, so that it uses the HBase JARs from the _target/_ directory within the build environment.
[source,bash]
----
$ HADOOP_CLASSPATH=${HBASE_BUILD_HOME}/hbase-mapreduce/target/hbase-mapreduce-VERSION-SNAPSHOT.jar:`${HBASE_BUILD_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_BUILD_HOME}/hbase-mapreduce/target/hbase-mapreduce-VERSION-SNAPSHOT.jar rowcounter usertable
----
====
.Notice to MapReduce users of HBase between 0.96.1 and 0.98.4
[CAUTION]
====
Some MapReduce jobs that use HBase fail to launch.
The symptom is an exception similar to the following:
----
Exception in thread "main" java.lang.IllegalAccessError: class
com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass
com.google.protobuf.LiteralByteString
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:792)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at
org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:818)
at
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertScanToString(TableMapReduceUtil.java:433)
at
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:186)
at
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:147)
at
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:270)
at
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:100)
...
----
This is caused by an optimization introduced in link:https://issues.apache.org/jira/browse/HBASE-9867[HBASE-9867] that inadvertently introduced a classloader dependency.
This affects both jobs using the `-libjars` option and "fat jar" jobs, those which package their runtime dependencies in a nested `lib` folder.
In order to satisfy the new classloader requirements, `hbase-protocol.jar` must be included in Hadoop's classpath.
See <<hbase.mapreduce.classpath>> for current recommendations for resolving classpath errors.
The following is included for historical purposes.
This can be resolved system-wide by including a reference to the `hbase-protocol.jar` in Hadoop's lib directory, via a symlink or by copying the jar into the new location.
This can also be achieved on a per-job launch basis by including it in the `HADOOP_CLASSPATH` environment variable at job submission time.
When launching jobs that package their dependencies, all three of the following job launching commands satisfy this requirement:
[source,bash]
----
$ HADOOP_CLASSPATH=/path/to/hbase-protocol.jar:/path/to/hbase/conf hadoop jar MyJob.jar MyJobMainClass
$ HADOOP_CLASSPATH=$(hbase mapredcp):/path/to/hbase/conf hadoop jar MyJob.jar MyJobMainClass
$ HADOOP_CLASSPATH=$(hbase classpath) hadoop jar MyJob.jar MyJobMainClass
----
For jars that do not package their dependencies, the following command structure is necessary:
[source,bash]
----
$ HADOOP_CLASSPATH=$(hbase mapredcp):/etc/hbase/conf hadoop jar MyApp.jar MyJobMainClass -libjars $(hbase mapredcp | tr ':' ',') ...
----
See also link:https://issues.apache.org/jira/browse/HBASE-10304[HBASE-10304] for further discussion of this issue.
====
== MapReduce Scan Caching
TableMapReduceUtil now restores the option to set scanner caching (the number of rows which are cached before returning the result to the client) on the Scan object that is passed in.
This functionality was lost due to a bug in HBase 0.95 (link:https://issues.apache.org/jira/browse/HBASE-11558[HBASE-11558]), which is fixed for HBase 0.98.5 and 0.96.3.
The priority order for choosing the scanner caching is as follows:
. Caching settings which are set on the scan object.
. Caching settings which are specified via the configuration option `hbase.client.scanner.caching`, which can either be set manually in _hbase-site.xml_ or via the helper method `TableMapReduceUtil.setScannerCaching()`.
. The default value `HConstants.DEFAULT_HBASE_CLIENT_SCANNER_CACHING`, which is set to `100`.
Optimizing the caching settings is a balance between the time the client waits for a result and the number of sets of results the client needs to receive.
If the caching setting is too large, the client could end up waiting for a long time or the request could even time out.
If the setting is too small, the scan needs to return results in several pieces.
If you think of the scan as a shovel, a bigger cache setting is analogous to a bigger shovel, and a smaller cache setting is equivalent to more shoveling in order to fill the bucket.
The list of priorities mentioned above allows you to set a reasonable default, and override it for specific operations.
See the API documentation for link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[Scan] for more details.
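As a hedged sketch of the first two priority levels (the `job` variable and the caching values are placeholders):

[source,java]
----
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;

// Highest priority: caching set directly on the Scan object.
Scan scan = new Scan();
scan.setCaching(500);

// Next priority: a job-wide default for hbase.client.scanner.caching,
// set here via the TableMapReduceUtil helper.
TableMapReduceUtil.setScannerCaching(job, 200);
----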
== Bundled HBase MapReduce Jobs
The HBase JAR also serves as a Driver for some bundled MapReduce jobs.
To learn about the bundled MapReduce jobs, run the following command.
[source,bash]
----
$ ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-mapreduce-VERSION.jar
An example program must be given as the first argument.
Valid program names are:
copytable: Export a table from local cluster to peer cluster
completebulkload: Complete a bulk data load.
export: Write table data to HDFS.
import: Import data written by Export.
importtsv: Import data in TSV format.
rowcounter: Count rows in HBase table
----
Each of the valid program names are bundled MapReduce jobs.
To run one of the jobs, model your command after the following example.
[source,bash]
----
$ ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-mapreduce-VERSION.jar rowcounter myTable
----
== HBase as a MapReduce Job Data Source and Data Sink
HBase can be used as a data source, link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormat.html[TableInputFormat], and data sink, link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html[TableOutputFormat] or link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.html[MultiTableOutputFormat], for MapReduce jobs.
When writing MapReduce jobs that read or write HBase, it is advisable to subclass link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableMapper.html[TableMapper] and/or link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableReducer.html[TableReducer].
See the do-nothing pass-through classes link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/IdentityTableMapper.html[IdentityTableMapper] and link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/IdentityTableReducer.html[IdentityTableReducer] for basic usage.
For a more involved example, see link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter] or review the `org.apache.hadoop.hbase.mapreduce.TestTableMapReduce` unit test.
If you run MapReduce jobs that use HBase as a source or sink, you need to specify the source and sink table and column names in your configuration.
When you read from HBase, the `TableInputFormat` requests the list of regions from HBase and makes a map task per region, or `mapreduce.job.maps` map tasks, whichever is smaller.
If your job only has two maps, raise `mapreduce.job.maps` to a number greater than the number of regions.
Maps will run on the adjacent TaskTracker/NodeManager if you are running a TaskTracker/NodeManager and RegionServer per node.
When writing to HBase, it may make sense to avoid the Reduce step and write back into HBase from within your map.
This approach works when your job does not need the sort and collation that MapReduce does on the map-emitted data.
On insert, HBase 'sorts' so there is no point double-sorting (and shuffling data around your MapReduce cluster) unless you need to.
If you do not need the Reduce, your map might emit counts of records processed for reporting at the end of the job, or set the number of Reduces to zero and use TableOutputFormat.
If running the Reduce step makes sense in your case, you should typically use multiple reducers so that load is spread across the HBase cluster.
A new HBase partitioner, the link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.html[HRegionPartitioner], can run as many reducers as there are existing regions.
The HRegionPartitioner is suitable when your table is large and your upload will not greatly alter the number of existing regions upon completion.
Otherwise use the default partitioner.
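A hedged sketch of wiring it in, using the `initTableReducerJob` overload that accepts a partitioner (`targetTable`, `MyTableReducer`, and `job` are placeholders in the style of the examples below):

[source,java]
----
import org.apache.hadoop.hbase.mapreduce.HRegionPartitioner;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;

// Route each reduce output to the region that will host its row key.
TableMapReduceUtil.initTableReducerJob(
  targetTable,             // output table
  MyTableReducer.class,    // reducer class
  job,                     // job
  HRegionPartitioner.class);
----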
== Writing HFiles Directly During Bulk Import
If you are importing into a new table, you can bypass the HBase API and write your content directly to the filesystem, formatted into HBase data files (HFiles). Your import will run faster, perhaps an order of magnitude faster.
For more on how this mechanism works, see <<arch.bulk.load>>.
== RowCounter Example
The included link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter] MapReduce job uses `TableInputFormat` and does a count of all rows in the specified table.
To run it, use the following command:
[source,bash]
----
$ ./bin/hadoop jar hbase-X.X.X.jar
----
This will invoke the HBase MapReduce Driver class.
Select `rowcounter` from the choice of jobs offered.
This will print rowcounter usage advice to standard output.
Specify the tablename, column to count, and output directory.
If you have classpath errors, see <<hbase.mapreduce.classpath>>.
[[splitter]]
== Map-Task Splitting
[[splitter.default]]
=== The Default HBase MapReduce Splitter
When link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormat.html[TableInputFormat] is used to source an HBase table in a MapReduce job, its splitter will make a map task for each region of the table.
Thus, if there are 100 regions in the table, there will be 100 map-tasks for the job - regardless of how many column families are selected in the Scan.
[[splitter.custom]]
=== Custom Splitters
For those interested in implementing custom splitters, see the method `getSplits` in link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html[TableInputFormatBase].
That is where the logic for map-task assignment resides.
[[mapreduce.example]]
== HBase MapReduce Examples
[[mapreduce.example.read]]
=== HBase MapReduce Read Example
The following is an example of using HBase as a MapReduce source in read-only manner.
Specifically, there is a Mapper instance but no Reducer, and nothing is being emitted from the Mapper.
The job would be defined as follows...
[source,java]
----
Configuration config = HBaseConfiguration.create();
Job job = new Job(config, "ExampleRead");
job.setJarByClass(MyReadJob.class); // class that contains mapper
Scan scan = new Scan();
scan.setCaching(500); // 1 is the default in Scan, which will be bad for MapReduce jobs
scan.setCacheBlocks(false); // don't set to true for MR jobs
// set other scan attrs
...
TableMapReduceUtil.initTableMapperJob(
tableName, // input HBase table name
scan, // Scan instance to control CF and attribute selection
MyMapper.class, // mapper
null, // mapper output key
null, // mapper output value
job);
job.setOutputFormatClass(NullOutputFormat.class); // because we aren't emitting anything from mapper
boolean b = job.waitForCompletion(true);
if (!b) {
throw new IOException("error with job!");
}
----
...and the mapper instance would extend link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableMapper.html[TableMapper]...
[source,java]
----
public static class MyMapper extends TableMapper<Text, Text> {
public void map(ImmutableBytesWritable row, Result value, Context context) throws InterruptedException, IOException {
// process data for the row from the Result instance.
}
}
----
[[mapreduce.example.readwrite]]
=== HBase MapReduce Read/Write Example
The following is an example of using HBase both as a source and as a sink with MapReduce.
This example will simply copy data from one table to another.
[source,java]
----
Configuration config = HBaseConfiguration.create();
Job job = new Job(config,"ExampleReadWrite");
job.setJarByClass(MyReadWriteJob.class); // class that contains mapper
Scan scan = new Scan();
scan.setCaching(500); // 1 is the default in Scan, which will be bad for MapReduce jobs
scan.setCacheBlocks(false); // don't set to true for MR jobs
// set other scan attrs
TableMapReduceUtil.initTableMapperJob(
sourceTable, // input table
scan, // Scan instance to control CF and attribute selection
MyMapper.class, // mapper class
null, // mapper output key
null, // mapper output value
job);
TableMapReduceUtil.initTableReducerJob(
targetTable, // output table
null, // reducer class
job);
job.setNumReduceTasks(0);
boolean b = job.waitForCompletion(true);
if (!b) {
throw new IOException("error with job!");
}
----
An explanation is required of what `TableMapReduceUtil` is doing, especially with the reducer. link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html[TableOutputFormat] is being used as the outputFormat class, and several parameters are being set on the config (e.g., `TableOutputFormat.OUTPUT_TABLE`), as well as setting the reducer output key to `ImmutableBytesWritable` and reducer value to `Writable`.
These could be set by the programmer on the job and conf, but `TableMapReduceUtil` tries to make things easier.
The following is the example mapper, which will create a `Put` matching the input `Result` and emit it.
Note: this is what the CopyTable utility does.
[source,java]
----
public static class MyMapper extends TableMapper<ImmutableBytesWritable, Put> {
public void map(ImmutableBytesWritable row, Result value, Context context) throws IOException, InterruptedException {
// this example is just copying the data from the source table...
context.write(row, resultToPut(row,value));
}
private static Put resultToPut(ImmutableBytesWritable key, Result result) throws IOException {
Put put = new Put(key.get());
for (Cell cell : result.listCells()) {
put.add(cell);
}
return put;
}
}
----
There isn't actually a reducer step, so `TableOutputFormat` takes care of sending the `Put` to the target table.
This is just an example, developers could choose not to use `TableOutputFormat` and connect to the target table themselves.
[[mapreduce.example.readwrite.multi]]
=== HBase MapReduce Read/Write Example With Multi-Table Output
TODO: example for `MultiTableOutputFormat`.
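Pending that official example, here is a hedged sketch of the core idea (not from the original doc; `job`, `context`, `rowKey`, and the table/column names are placeholders): with `MultiTableOutputFormat`, the output key names the destination table, so a single job can write `Put`s (or `Delete`s) to several tables.

[source,java]
----
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat;
import org.apache.hadoop.hbase.util.Bytes;

// Job setup: the output format alone decides the destination tables.
job.setOutputFormatClass(MultiTableOutputFormat.class);

// In the mapper or reducer: route each Put by wrapping the target
// table name in the ImmutableBytesWritable output key.
Put put = new Put(rowKey);
put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
context.write(new ImmutableBytesWritable(Bytes.toBytes("table1")), put);
----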
[[mapreduce.example.summary]]
=== HBase MapReduce Summary to HBase Example
The following example uses HBase as a MapReduce source and sink with a summarization step.
This example will count the number of distinct instances of a value in a table and write those summarized counts in another table.
[source,java]
----
Configuration config = HBaseConfiguration.create();
Job job = new Job(config,"ExampleSummary");
job.setJarByClass(MySummaryJob.class); // class that contains mapper and reducer
Scan scan = new Scan();
scan.setCaching(500); // 1 is the default in Scan, which will be bad for MapReduce jobs
scan.setCacheBlocks(false); // don't set to true for MR jobs
// set other scan attrs
TableMapReduceUtil.initTableMapperJob(
sourceTable, // input table
scan, // Scan instance to control CF and attribute selection
MyMapper.class, // mapper class
Text.class, // mapper output key
IntWritable.class, // mapper output value
job);
TableMapReduceUtil.initTableReducerJob(
targetTable, // output table
MyTableReducer.class, // reducer class
job);
job.setNumReduceTasks(1); // at least one, adjust as required
boolean b = job.waitForCompletion(true);
if (!b) {
throw new IOException("error with job!");
}
----
In this example mapper a column with a String-value is chosen as the value to summarize upon.
This value is used as the key to emit from the mapper, and an `IntWritable` represents an instance counter.
[source,java]
----
public static class MyMapper extends TableMapper<Text, IntWritable> {
public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR1 = "attr1".getBytes();
private final IntWritable ONE = new IntWritable(1);
private Text text = new Text();
public void map(ImmutableBytesWritable row, Result value, Context context) throws IOException, InterruptedException {
String val = new String(value.getValue(CF, ATTR1));
text.set(val); // we can only emit Writables...
context.write(text, ONE);
}
}
----
In the reducer, the "ones" are counted (just like any other MR example that does this), and then a `Put` is emitted.
[source,java]
----
public static class MyTableReducer extends TableReducer<Text, IntWritable, ImmutableBytesWritable> {
public static final byte[] CF = "cf".getBytes();
public static final byte[] COUNT = "count".getBytes();
public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
int i = 0;
for (IntWritable val : values) {
i += val.get();
}
Put put = new Put(Bytes.toBytes(key.toString()));
put.add(CF, COUNT, Bytes.toBytes(i));
context.write(null, put);
}
}
----
[[mapreduce.example.summary.file]]
=== HBase MapReduce Summary to File Example
This is very similar to the summary example above, with the exception that it uses HBase as a MapReduce source but HDFS as the sink.
The differences are in the job setup and in the reducer.
The mapper remains the same.
[source,java]
----
Configuration config = HBaseConfiguration.create();
Job job = new Job(config,"ExampleSummaryToFile");
job.setJarByClass(MySummaryFileJob.class); // class that contains mapper and reducer
Scan scan = new Scan();
scan.setCaching(500); // 1 is the default in Scan, which will be bad for MapReduce jobs
scan.setCacheBlocks(false); // don't set to true for MR jobs
// set other scan attrs
TableMapReduceUtil.initTableMapperJob(
sourceTable, // input table
scan, // Scan instance to control CF and attribute selection
MyMapper.class, // mapper class
Text.class, // mapper output key
IntWritable.class, // mapper output value
job);
job.setReducerClass(MyReducer.class); // reducer class
job.setNumReduceTasks(1); // at least one, adjust as required
FileOutputFormat.setOutputPath(job, new Path("/tmp/mr/mySummaryFile")); // adjust directories as required
boolean b = job.waitForCompletion(true);
if (!b) {
throw new IOException("error with job!");
}
----
As stated above, the previous Mapper can run unchanged with this example.
As for the Reducer, it is a "generic" Reducer instead of extending TableReducer and emitting Puts.
[source,java]
----
public static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
int i = 0;
for (IntWritable val : values) {
i += val.get();
}
context.write(key, new IntWritable(i));
}
}
----
[[mapreduce.example.summary.noreducer]]
=== HBase MapReduce Summary to HBase Without Reducer
It is also possible to perform summaries without a reducer - if you use HBase as the reducer.
An HBase target table would need to exist for the job summary.
The Table method `incrementColumnValue` would be used to atomically increment values.
From a performance perspective, it might make sense to keep a Map of keys with their counts to be incremented for each map-task, and make one update per key during the `cleanup` method of the mapper, as sketched below.
However, your mileage may vary depending on the number of rows to be processed and unique keys.
In the end, the summary results are in HBase.
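A hedged sketch of that pattern (the class, table, and column names are placeholders; the original text gives no code here):

[source,java]
----
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;

public class MyIncrementingMapper extends TableMapper<NullWritable, NullWritable> {
  public static final byte[] CF = Bytes.toBytes("cf");
  public static final byte[] ATTR1 = Bytes.toBytes("attr1");
  public static final byte[] COUNT = Bytes.toBytes("count");
  private Connection connection;
  private Table summaryTable;
  private final Map<String, Long> counts = new HashMap<>();

  @Override
  protected void setup(Context context) throws IOException {
    connection = ConnectionFactory.createConnection(context.getConfiguration());
    summaryTable = connection.getTable(TableName.valueOf("summaryTable"));
  }

  @Override
  public void map(ImmutableBytesWritable row, Result value, Context context) {
    // Buffer increments locally rather than issuing one RPC per cell seen.
    String val = Bytes.toString(value.getValue(CF, ATTR1));
    counts.merge(val, 1L, Long::sum);
  }

  @Override
  protected void cleanup(Context context) throws IOException {
    // One atomic increment per distinct key for this map-task.
    for (Map.Entry<String, Long> e : counts.entrySet()) {
      summaryTable.incrementColumnValue(Bytes.toBytes(e.getKey()), CF, COUNT, e.getValue());
    }
    summaryTable.close();
    connection.close();
  }
}
----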
[[mapreduce.example.summary.rdbms]]
=== HBase MapReduce Summary to RDBMS
Sometimes it is more appropriate to generate summaries to an RDBMS.
For these cases, it is possible to generate summaries directly to an RDBMS via a custom reducer.
The `setup` method can connect to an RDBMS (the connection information can be passed via custom parameters in the context) and the cleanup method can close the connection.
It is critical to understand that number of reducers for the job affects the summarization implementation, and you'll have to design this into your reducer.
Specifically, whether it is designed to run as a singleton (one reducer) or multiple reducers.
Neither is right or wrong, it depends on your use-case.
Recognize that the more reducers that are assigned to the job, the more simultaneous connections to the RDBMS will be created - this will scale, but only to a point.
[source,java]
----
public static class MyRdbmsReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
private Connection c = null;
public void setup(Context context) {
// create DB connection...
}
public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
// do summarization
// in this example the keys are Text, but this is just an example
}
public void cleanup(Context context) {
// close db connection
}
}
----
In the end, the summary results are written to your RDBMS table/s.
[[mapreduce.htable.access]]
== Accessing Other HBase Tables in a MapReduce Job
Although the framework currently allows one HBase table as input to a MapReduce job, other HBase tables can be accessed as lookup tables, etc., in a MapReduce job by creating a Table instance in the setup method of the Mapper.
[source,java]
----
public class MyMapper extends TableMapper<Text, LongWritable> {
  private Table myOtherTable;

  public void setup(Context context) throws IOException {
    // In here create a Connection to the cluster and save it or use the Connection
    // from the existing table
    myOtherTable = connection.getTable(TableName.valueOf("myOtherTable"));
  }

  public void map(ImmutableBytesWritable row, Result value, Context context) throws IOException, InterruptedException {
    // process Result...
    // use 'myOtherTable' for lookups
  }
}
----
[[mapreduce.specex]]
== Speculative Execution
It is generally advisable to turn off speculative execution for MapReduce jobs that use HBase as a source.
This can either be done on a per-Job basis through properties, or on the entire cluster.
Especially for longer running jobs, speculative execution will create duplicate map-tasks which will double-write your data to HBase; this is probably not what you want.
See <<spec.ex,spec.ex>> for more information.
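As a hedged per-job illustration (the property keys are the standard Hadoop 2 names; `job` is a placeholder):

[source,java]
----
import org.apache.hadoop.conf.Configuration;

// Turn speculative execution off for this job only.
Configuration conf = job.getConfiguration();
conf.setBoolean("mapreduce.map.speculative", false);
conf.setBoolean("mapreduce.reduce.speculative", false);
----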
[[cascading]]
== Cascading
link:http://www.cascading.org/[Cascading] is an alternative API for MapReduce, which
actually uses MapReduce, but allows you to write your MapReduce code in a simplified
way.
The following example shows a Cascading `Flow` which "sinks" data into an HBase cluster. The same
`hBaseTap` API could be used to "source" data as well.
[source, java]
----
// read data from the default filesystem
// emits two fields: "offset" and "line"
Tap source = new Hfs( new TextLine(), inputFileLhs );
// store data in an HBase cluster
// accepts fields "num", "lower", and "upper"
// will automatically scope incoming fields to their proper familyname, "left" or "right"
Fields keyFields = new Fields( "num" );
String[] familyNames = {"left", "right"};
Fields[] valueFields = new Fields[] {new Fields( "lower" ), new Fields( "upper" ) };
Tap hBaseTap = new HBaseTap( "multitable", new HBaseScheme( keyFields, familyNames, valueFields ), SinkMode.REPLACE );
// a simple pipe assembly to parse the input into fields
// a real app would likely chain multiple Pipes together for more complex processing
Pipe parsePipe = new Each( "insert", new Fields( "line" ), new RegexSplitter( new Fields( "num", "lower", "upper" ), " " ) );
// "plan" a cluster executable Flow
// this connects the source Tap and hBaseTap (the sink Tap) to the parsePipe
Flow parseFlow = new FlowConnector( properties ).connect( source, hBaseTap, parsePipe );
// start the flow, and block until complete
parseFlow.complete();
// open an iterator on the HBase table we stuffed data into
TupleEntryIterator iterator = parseFlow.openSink();
while(iterator.hasNext())
{
// print out each tuple from HBase
System.out.println( "iterator.next() = " + iterator.next() );
}
iterator.close();
----
View File

@ -1,225 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[offheap_read_write]]
= RegionServer Offheap Read/Write Path
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
[[regionserver.offheap.overview]]
== Overview
To help reduce P99/P999 RPC latencies, HBase 2.x has made the read and write path use a pool of offheap buffers. Cells are
allocated in offheap memory outside of the purview of the JVM garbage collector, with an attendant reduction in GC pressure.
In the write path, the request packet received from the client is read into a pre-allocated offheap buffer and retained
offheap until those cells are successfully persisted to the WAL and Memstore. The memory data structure in Memstore does
not directly store the cell memory, but references the cells encoded in the offheap buffers. Similarly for the read path:
we'll try the block cache first and, on a cache miss, go to the HFile and read the respective block. The
workflow from reading blocks to sending cells to the client does its best to avoid on-heap memory allocations, reducing the
amount of work the GC has to do.
image::offheap-overview.png[]
For redress of the single mention of onheap in the read section of the diagram above, see <<regionserver.read.hdfs.block.offheap>>.
[[regionserver.offheap.readpath]]
== Offheap read-path
In HBase-2.0.0, link:https://issues.apache.org/jira/browse/HBASE-11425[HBASE-11425] changed the HBase read path so it
could hold the read-data off-heap avoiding copying of cached data (BlockCache) on to the java heap (for uncached data,
see note under the diagram in the section above). This reduces GC pauses given there is less garbage made and so less
to clear. The off-heap read path can have performance similar to, or better than, that of the on-heap LRU cache.
This feature is available since HBase 2.0.0. Refer to the blogs below for more details and test results on the off-heap read path:
link:https://blogs.apache.org/hbase/entry/offheaping_the_read_path_in[Offheaping the Read Path in Apache HBase: Part 1 of 2]
and link:https://blogs.apache.org/hbase/entry/offheap-read-path-in-production[Offheap Read-Path in Production - The Alibaba story].
For an end-to-end off-heaped read-path, all you have to do is enable an off-heap backed <<offheap.blockcache>> (BC).
To do this, configure _hbase.bucketcache.ioengine_ to be _offheap_ in _hbase-site.xml_ (See <<bc.deploy.modes>> to learn
more about _hbase.bucketcache.ioengine_ options). Also specify the total capacity of the BC using `hbase.bucketcache.size`.
Please remember to adjust the value of _HBASE_OFFHEAPSIZE_ in _hbase-env.sh_ (See <<bc.example>> for help sizing and an example
enabling). This configuration specifies the maximum possible off-heap memory allocation for the RegionServer java
process. This should be bigger than the off-heap BC size to accommodate usage by other features making use of off-heap memory
such as Server RPC buffer pool and short-circuit reads (See discussion in <<bc.example>>).
Please keep in mind that there is no default for `hbase.bucketcache.ioengine` which means the `BlockCache` is OFF by default
(See <<direct.memory>>).
This is all you need to do to enable off-heap read path. Most buffers in HBase are already off-heap. With BC off-heap,
the read pipeline will copy data between HDFS and the server socket -- caveat <<hbase.ipc.server.reservoir.initial.max>> --
sending results back to the client.
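As a concrete illustration of the above, a minimal _hbase-site.xml_ sketch might look like the following; the capacity shown is an arbitrary example, not a recommendation (see <<bc.example>> for proper sizing):

[source,xml]
----
<!-- Sketch only: an offheap BucketCache of 8GB (values above 1.0 are MB). -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>8192</value>
</property>
----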
[[regionserver.offheap.rpc.bb.tuning]]
===== Tuning the RPC buffer pool
It is possible to tune the ByteBuffer pool on the RPC server side used to accumulate the cell bytes and create result
cell blocks to send back to the client side. Use `hbase.ipc.server.reservoir.enabled` to turn this pool ON or OFF. By
default this pool is ON and available. HBase will create off-heap ByteBuffers and pool them by default. Please
make sure not to turn this OFF if you want end-to-end off-heaping in read path.
If this pool is turned off, the server will create temp buffers onheap to accumulate the cell bytes and
make a result cell block. This can impact the GC on a highly read loaded server.
NOTE: the config keys which start with prefix `hbase.ipc.server.reservoir` are deprecated in hbase-3.x (the
internal pool implementation changed). If you are still in hbase-2.2.x or older, then just use the old config
keys. Otherwise if in hbase-3.x or hbase-2.3.x+, please use the new config keys
(See <<regionserver.read.hdfs.block.offheap,deprecated and new configs in HBase3.x>>)
Next thing to tune is the ByteBuffer pool on the RPC server side. The user can tune this pool with respect to how
many buffers are in the pool and what should be the size of each ByteBuffer. Use the config
`hbase.ipc.server.reservoir.initial.buffer.size` to tune each of the buffer sizes. The default is 64KB for hbase-2.2.x
and earlier, changed to 65KB by default for hbase-2.3.x+
(see link:https://issues.apache.org/jira/browse/HBASE-22532[HBASE-22532])
When the result size is larger than one 64KB (Default) ByteBuffer size, the server will try to grab more than one
ByteBuffer and make a result cell block out of a collection of fixed-sized ByteBuffers. When the pool is running
out of buffers, the server will skip the pool and create temporary on-heap buffers.
The maximum number of ByteBuffers in the pool can be tuned using the config `hbase.ipc.server.reservoir.initial.max`.
Its default is a factor of region server handlers count (See the config `hbase.regionserver.handler.count`). The
math is such that by default we consider 2 MB as the result cell block size per read result and each handler will be
handling a read. For 2 MB size, we need 32 buffers each of size 64 KB (See default buffer size in pool). So per handler
32 ByteBuffers (BB). We allocate twice this size as the max BBs count such that one handler can be creating the response
and handing it to the RPC Responder thread and then handling a new request creating a new response cell block (using
pooled buffers). Even if the responder cannot send back the first TCP reply immediately, this count should ensure
we still have enough buffers in our pool without having to make temporary buffers on the heap. Again, for smaller-
sized random row reads, tune this max count. These are lazily created buffers and the count is the max count to be pooled.
If you still see GC issues even after making the end-to-end read path off-heap, look for issues in the appropriate buffer
pool. Check for the below RegionServer log line at INFO level in HBase 2.x:
[source]
----
Pool already reached its max capacity : XXX and no free buffers now. Consider increasing the value for 'hbase.ipc.server.reservoir.initial.max' ?
----
Or the following log message in HBase 3.x:
[source]
----
Pool already reached its max capacity : XXX and no free buffers now. Consider increasing the value for 'hbase.server.allocator.max.buffer.count' ?
----
[[hbase.offheapsize]]
The setting for _HBASE_OFFHEAPSIZE_ in _hbase-env.sh_ should consider this off-heap buffer pool on the server side as well.
We need to configure this max off-heap size for the RegionServer to be a bit higher than the sum of this max pool size and
the off-heap cache size. The TCP layer will also need to create direct bytebuffers for TCP communication. The DFS
client will need some off-heap memory to do its work as well, especially if short-circuit reads are configured. Allocating an extra
1 - 2 GB for the max direct memory size has worked in tests.
If you are using coprocessors and refer to the Cells in the read results, DO NOT store references to these Cells outside of
the scope of the CP hook methods. Sometimes a CP wants to store info about a cell (like its row key) for consideration
in the next CP hook call, etc. For such cases, please clone the required fields (or the entire Cell) as the use case requires
[see the CellUtil#cloneXXX(Cell) APIs].
[[regionserver.read.hdfs.block.offheap]]
== Read block from HDFS to offheap directly
In HBase-2.x, the RegionServer will read blocks from HDFS to a temporary onheap ByteBuffer and then flush to
the BucketCache. Even if the BucketCache is offheap, we will first pull the HDFS read onheap before writing
it out to the offheap BucketCache. We can observe much GC pressure when the cache hit ratio is low (e.g. a cacheHitRatio of ~60%).
link:https://issues.apache.org/jira/browse/HBASE-21879[HBASE-21879] addresses this issue (Requires hbase-2.3.x/hbase-3.x).
It depends on a supporting HDFS being in place (hadoop-2.10.x or hadoop-3.3.x) and it may require patching
HBase itself (as of this writing); see
link:https://issues.apache.org/jira/browse/HBASE-21879[HBASE-21879 Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose].
Appropriately set up, reads from HDFS go into offheap buffers that are passed offheap to the offheap BlockCache for caching.
For more details about the design and performance improvement, please see the
link:https://docs.google.com/document/d/1xSy9axGxafoH-Qc17zbD2Bd--rWjjI00xTWQZ8ZwI_E[Design Doc -Read HFile's block to Offheap].
Here we will share some best practices for performance tuning, but first we introduce the new (hbase-3.x/hbase-2.3.x) configuration names
that go with the new internal pool implementation (`ByteBuffAllocator` vs the old `ByteBufferPool`), some of which mimic now deprecated
hbase-2.2.x configurations discussed above in the <<regionserver.offheap.rpc.bb.tuning>>. Much of the advice here overlaps that given above
in the <<regionserver.offheap.rpc.bb.tuning>> since the implementations have similar configurations.
1. `hbase.server.allocator.pool.enabled` is for whether the RegionServer will use the pooled offheap ByteBuffer allocator. Default
value is true. In hbase-2.x, the deprecated `hbase.ipc.server.reservoir.enabled` did similar and is mapped to this config
until support for the old configuration is removed. This new name will be used in hbase-3.x and hbase-2.3.x+.
2. `hbase.server.allocator.minimal.allocate.size` is the threshold at which we start allocating from the pool. Otherwise the
request will be allocated from onheap directly because it would be wasteful allocating small stuff from our pool of fixed-size
ByteBuffers. The default minimum is `hbase.server.allocator.buffer.size/6`.
3. `hbase.server.allocator.max.buffer.count`: The `ByteBuffAllocator`, the new pool/reservoir implementation, has fixed-size
ByteBuffers. This config is for how many buffers to pool. Its default value is 2MB * 2 * hbase.regionserver.handler.count / 65KB
(similar to the discussion above in <<regionserver.offheap.rpc.bb.tuning>>). If the default `hbase.regionserver.handler.count` is 30, then the default will be 1890.
4. `hbase.server.allocator.buffer.size`: The byte size of each ByteBuffer. The default value is 66560 (65KB), here we choose 65KB instead of 64KB
because of link:https://issues.apache.org/jira/browse/HBASE-22532[HBASE-22532].
The three config keys -- `hbase.ipc.server.reservoir.enabled`, `hbase.ipc.server.reservoir.initial.buffer.size` and `hbase.ipc.server.reservoir.initial.max` -- introduced in hbase-2.x
have been renamed and deprecated in hbase-3.x/hbase-2.3.x. Please use the new config keys instead:
`hbase.server.allocator.pool.enabled`, `hbase.server.allocator.buffer.size` and `hbase.server.allocator.max.buffer.count`.
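As a sketch, the new-style keys might appear in _hbase-site.xml_ as follows; the values shown are the defaults described above, assuming 30 handlers for the buffer count:

[source,xml]
----
<property>
  <name>hbase.server.allocator.pool.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hbase.server.allocator.buffer.size</name>
  <value>66560</value> <!-- 65KB -->
</property>
<property>
  <name>hbase.server.allocator.max.buffer.count</name>
  <value>1890</value> <!-- 2MB * 2 * 30 handlers / 65KB -->
</property>
----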
Next, we have some suggestions regarding performance.
.Please make sure that there are enough pooled DirectByteBuffers in your ByteBuffAllocator.
The ByteBuffAllocator will allocate ByteBuffer from the DirectByteBuffer pool first. If
there's no available ByteBuffer in the pool, then we will allocate the ByteBuffers from onheap.
By default, we will pre-allocate 4MB for each RPC handler (The handler count is determined by the config:
`hbase.regionserver.handler.count`, which has the default value 30). That's to say, if your `hbase.server.allocator.buffer.size`
is 65KB, then your pool will have 2MB * 2 / 65KB * 30 = 1890 DirectByteBuffers. If you have a large scan and a big cache,
you may have an RPC response whose byte size is greater than 2MB (another 2MB is used for receiving the rpc request); in that case it will
be better to increase the `hbase.server.allocator.max.buffer.count`.
The RegionServer web UI has statistics on ByteBuffAllocator:
image::bytebuff-allocator-stats.png[]
If the following condition is met, you may need to increase the `hbase.server.allocator.max.buffer.count`:
heapAllocationRatio >= hbase.server.allocator.minimal.allocate.size / hbase.server.allocator.buffer.size * 100%
.Please make sure the buffer size is greater than your block size.
We have the default block size of 64KB, so almost all of the data blocks will be 64KB + a small delta, where the delta is
very small, depending on the size of the last Cell. If we set `hbase.server.allocator.buffer.size`=64KB,
then each block will be allocated as two ByteBuffers: one 64KB DirectByteBuffer and one HeapByteBuffer for the delta bytes.
Ideally, we should let the data block be allocated as one ByteBuffer; it has a simpler data structure, faster access speed,
and less heap usage. Also, if the blocks are a composite of multiple ByteBuffers, to validate the checksum
we have to perform a temporary heap copy (see link:https://issues.apache.org/jira/browse/HBASE-21917[HBASE-21917])
whereas if it's a single ByteBuffer we can speed up the checksum by calling the hadoop native checksum lib; it's much faster.
Please also see link:https://issues.apache.org/jira/browse/HBASE-22483[HBASE-22483].
Don't forget to up your _HBASE_OFFHEAPSIZE_ accordingly. See <<hbase.offheapsize>>.
[[regionserver.offheap.writepath]]
== Offheap write-path
In hbase-2.x, link:https://issues.apache.org/jira/browse/HBASE-15179[HBASE-15179] made the HBase write path work off-heap. By default, the MemStores in
HBase have always used MemStore Local Allocation Buffers (MSLABs) to avoid memory fragmentation; an MSLAB creates bigger fixed sized chunks and then the
MemStore's Cell data gets copied into these MSLAB chunks. These chunks can be pooled as well, and from hbase-2.x on, the MSLAB pool is ON by default.
Write off-heaping makes use of the MSLAB pool. It creates MSLAB chunks as Direct ByteBuffers and pools them.
`hbase.regionserver.offheap.global.memstore.size` is the configuration key which controls the amount of off-heap data. Its value is the number of megabytes
of off-heap memory that should be used by MSLAB (e.g. `25` would result in 25MB of off-heap). Be sure to increase _HBASE_OFFHEAPSIZE_ which will set the JVM's
MaxDirectMemorySize property (see <<hbase.offheapsize>> for more on _HBASE_OFFHEAPSIZE_). The default value of
`hbase.regionserver.offheap.global.memstore.size` is 0 which means MSLAB uses onheap, not offheap, chunks by default.
`hbase.hregion.memstore.mslab.chunksize` controls the size of each off-heap chunk. Default is `2097152` (2MB).
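A sketch of the two settings just described; the off-heap capacity shown is illustrative only:

[source,xml]
----
<property>
  <name>hbase.regionserver.offheap.global.memstore.size</name>
  <!-- MB of off-heap MSLAB memory; 0 (the default) keeps MSLAB onheap -->
  <value>1024</value>
</property>
<property>
  <name>hbase.hregion.memstore.mslab.chunksize</name>
  <value>2097152</value> <!-- 2MB, the default chunk size -->
</property>
----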
When a Cell is added to a MemStore, the bytes for that Cell are copied into these off-heap buffers (if `hbase.regionserver.offheap.global.memstore.size` is non-zero)
and a Cell POJO will refer to this memory area. This can greatly reduce the on-heap occupancy of the MemStores and reduce the total heap utilization for RegionServers
in a write-heavy workload. On-heap and off-heap memory utilization are tracked at multiple levels to implement low level and high level memory management.
The decision to flush a MemStore considers both the on-heap and off-heap usage of that MemStore. At the Region level, we sum the on-heap and off-heap usages and
compare them against the region flush size (128MB, by default). Globally, the on-heap size occupancy of all memstores is tracked, as well as the off-heap size. When either of
these sizes breaches the lower mark (`hbase.regionserver.global.memstore.size.lower.limit`) or the maximum size (`hbase.regionserver.global.memstore.size`), all
regions are selected for forced flushes.
File diff suppressed because it is too large

View File
@ -1,42 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[appendix]
[[orca]]
== Apache HBase Orca
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
.Apache HBase Orca, HBase Colors, & Font
image::jumping-orca_rotated_25percent.png[]
link:https://issues.apache.org/jira/browse/HBASE-4920[An Orca is the Apache HBase mascot.] See NOTICES.txt.
We got our Orca logo here: http://www.vectorfree.com/jumping-orca It is licensed Creative Commons Attribution 3.0.
See https://creativecommons.org/licenses/by/3.0/us/ We changed the logo by stripping the colored background, inverting it, and then rotating it some.
The 'official' HBase color is "International Orange (Engineering)", the color of the link:https://en.wikipedia.org/wiki/International_orange[Golden Gate bridge] in San Francisco and for space suits used by NASA.
Our 'font' is link:http://www.dafont.com/bitsumishi.font[Bitsumishi].
:numbered:
View File
@ -1,76 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[appendix]
[[other.info]]
== Other Information About HBase
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
[[other.info.videos]]
=== HBase Videos
.Introduction to HBase
* link:https://vimeo.com/23400732[Introduction to HBase] by Todd Lipcon (Chicago Data Summit 2011).
* link:https://vimeo.com/26804675[Building Real Time Services at Facebook with HBase] by Jonathan Gray (Berlin buzzwords 2011)
* link:http://www.cloudera.com/videos/hw10_video_how_stumbleupon_built_and_advertising_platform_using_hbase_and_hadoop[The Multiple Uses Of HBase] by Jean-Daniel Cryans (Berlin buzzwords 2011).
[[other.info.pres]]
=== HBase Presentations (Slides)
link:https://www.slideshare.net/cloudera/hadoop-world-2011-advanced-hbase-schema-design-lars-george-cloudera[Advanced HBase Schema Design] by Lars George (Hadoop World 2011).
link:http://www.slideshare.net/cloudera/chicago-data-summit-apache-hbase-an-introduction[Introduction to HBase] by Todd Lipcon (Chicago Data Summit 2011).
link:http://www.slideshare.net/cloudera/hw09-practical-h-base-getting-the-most-from-your-h-base-install[Getting The Most From Your HBase Install] by Ryan Rawson, Jonathan Gray (Hadoop World 2009).
[[other.info.papers]]
=== HBase Papers
link:http://research.google.com/archive/bigtable.html[BigTable] by Google (2006).
link:http://www.larsgeorge.com/2010/05/hbase-file-locality-in-hdfs.html[HBase and HDFS Locality] by Lars George (2010).
link:http://ianvarley.com/UT/MR/Varley_MastersReport_Full_2009-08-07.pdf[No Relation: The Mixed Blessings of Non-Relational Databases] by Ian Varley (2009).
[[other.info.sites]]
=== HBase Sites
link:https://blog.cloudera.com/blog/category/hbase/[Cloudera's HBase Blog] has a lot of links to useful HBase information.
link:https://blog.cloudera.com/blog/2010/04/cap-confusion-problems-with-partition-tolerance/[CAP Confusion] is a relevant entry for background information on distributed storage systems.
link:http://refcardz.dzone.com/refcardz/hbase[HBase RefCard] from DZone.
[[other.info.books]]
=== HBase Books
link:http://shop.oreilly.com/product/0636920014348.do[HBase: The Definitive Guide] by Lars George.
[[other.info.books.hadoop]]
=== Hadoop Books
link:http://shop.oreilly.com/product/9780596521981.do[Hadoop: The Definitive Guide] by Tom White.
:numbered:
View File
@ -1,933 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[performance]]
= Apache HBase Performance Tuning
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
[[perf.os]]
== Operating System
[[perf.os.ram]]
=== Memory
RAM, RAM, RAM.
Don't starve HBase.
[[perf.os.64]]
=== 64-bit
Use a 64-bit platform (and 64-bit JVM).
[[perf.os.swap]]
=== Swapping
Watch out for swapping.
Set `swappiness` to 0.
[[perf.os.cpu]]
=== CPU
Make sure you have set up your Hadoop to use native, hardware checksumming.
See <<hadoop.native.lib>>.
[[perf.network]]
== Network
Perhaps the most important factor in avoiding network issues degrading Hadoop and HBase performance is the switching hardware that is used. Decisions made early in the scope of the project can cause major problems when you double or triple the size of your cluster (or more).
Important items to consider:
* Switching capacity of the device
* Number of systems connected
* Uplink capacity
[[perf.network.1switch]]
=== Single Switch
The single most important factor in this configuration is that the switching capacity of the hardware is capable of handling the traffic which can be generated by all systems connected to the switch.
Some lower priced commodity hardware can have a slower switching capacity than could be utilized by a full switch.
[[perf.network.2switch]]
=== Multiple Switches
Multiple switches are a potential pitfall in the architecture.
The most common configuration of lower priced hardware is a simple 1Gbps uplink from one switch to another.
This often overlooked pinch point can easily become a bottleneck for cluster communication.
Especially with MapReduce jobs that are both reading and writing a lot of data the communication across this uplink could be saturated.
Mitigation of this issue is fairly simple and can be accomplished in multiple ways:
* Use appropriate hardware for the scale of the cluster which you're attempting to build.
* Use larger single-switch configurations, i.e. a single 48-port switch as opposed to 2x 24-port switches.
* Configure port trunking for uplinks to utilize multiple interfaces to increase cross switch bandwidth.
[[perf.network.multirack]]
=== Multiple Racks
Multiple rack configurations carry the same potential issues as multiple switches, and can suffer performance degradation from two main areas:
* Poor switch capacity performance
* Insufficient uplink to another rack
If the switches in your rack have appropriate switching capacity to handle all the hosts at full speed, the next most likely issue will be caused by homing more of your cluster across racks.
The easiest way to avoid issues when spanning multiple racks is to use port trunking to create a bonded uplink to other racks.
The downside of this method however, is in the overhead of ports that could potentially be used.
For example, creating an 8Gbps port channel from rack A to rack B, using 8 of your 24 ports to communicate between racks, gives you a poor ROI; using too few, however, can mean you're not getting the most out of your cluster.
Using 10GbE links between racks will greatly increase performance, and assuming your switches support a 10GbE uplink (or allow for an expansion card), this lets you save your ports for machines as opposed to uplinks.
[[perf.network.ints]]
=== Network Interfaces
Are all the network interfaces functioning correctly? Are you sure? See the Troubleshooting Case Study in <<casestudies.slownode>>.
[[perf.network.call_me_maybe]]
=== Network Consistency and Partition Tolerance
The link:http://en.wikipedia.org/wiki/CAP_theorem[CAP Theorem] states that a distributed system can maintain two out of the following three characteristics:
- *C*onsistency -- all nodes see the same data.
- *A*vailability -- every request receives a response about whether it succeeded or failed.
- *P*artition tolerance -- the system continues to operate even if some of its components become unavailable to the others.
HBase favors consistency and partition tolerance, where a decision has to be made. Coda Hale explains why partition tolerance is so important, in http://codahale.com/you-cant-sacrifice-partition-tolerance/.
Robert Yokota used an automated testing framework called link:https://aphyr.com/tags/jepsen[Jepsen] to test HBase's partition tolerance in the face of network partitions, using techniques modeled after Aphyr's link:https://aphyr.com/posts/281-call-me-maybe-carly-rae-jepsen-and-the-perils-of-network-partitions[Call Me Maybe] series. The results, available as a link:https://rayokota.wordpress.com/2015/09/30/call-me-maybe-hbase/[blog post] and an link:https://rayokota.wordpress.com/2015/09/30/call-me-maybe-hbase-addendum/[addendum], show that HBase performs correctly.
[[jvm]]
== Java
[[gc]]
=== The Garbage Collector and Apache HBase
[[gcpause]]
==== Long GC pauses
In his presentation, link:http://www.slideshare.net/cloudera/hbase-hug-presentation[Avoiding Full GCs with MemStore-Local Allocation Buffers], Todd Lipcon describes two cases of stop-the-world garbage collections common in HBase, especially during loading: CMS failure modes and old generation heap fragmentation.
To address the first, start the CMS earlier than default by adding `-XX:CMSInitiatingOccupancyFraction` and setting it down from defaults.
Start at 60 or 70 percent (the lower you bring down the threshold, the more GCing is done, the more CPU used). To address the second fragmentation issue, Todd added an experimental facility,
(MSLAB), that must be explicitly enabled in Apache HBase 0.90.x (it defaults to _on_ in Apache HBase 0.92.x). Set `hbase.hregion.memstore.mslab.enabled` to true in your `Configuration`.
See the cited slides for background and detail.
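For illustration, starting CMS at 70 percent occupancy might look like this in _hbase-env.sh_; 70 is just the suggested starting point above:

[source]
----
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"
----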
The latest JVMs do better with regard to fragmentation, so make sure you are running a recent release.
Read down in the message, link:http://osdir.com/ml/hotspot-gc-use/2011-11/msg00002.html[Identifying concurrent mode failures caused by fragmentation].
Be aware that when enabled, each MemStore instance will occupy at least an MSLAB instance of memory.
If you have thousands of regions or lots of regions each with many column families, this allocation of MSLAB may be responsible for a good portion of your heap allocation and in an extreme case cause you to OOME.
Disable MSLAB in this case, or lower the amount of memory it uses, or float fewer regions per server.
If you have a write-heavy workload, check out link:https://issues.apache.org/jira/browse/HBASE-8163[HBASE-8163 MemStoreChunkPool: An improvement for JAVA GC when using MSLAB].
It describes configurations to lower the amount of young GC during write-heavy loadings.
If you do not have HBASE-8163 installed, and you are trying to improve your young GC times, one trick to consider -- courtesy of our Liang Xie -- is to set the GC config `-XX:PretenureSizeThreshold` in _hbase-env.sh_ to be just smaller than the size of `hbase.hregion.memstore.mslab.chunksize` so MSLAB allocations happen in the tenured space directly rather than first in the young gen.
You'd do this because these MSLAB allocations are likely going to make it to the old gen anyway; rather than pay the price of copies between s0 and s1 in the survivor space followed by the copy up from young to old gen after the MSLABs have achieved sufficient tenure, save a bit of YGC churn and allocate in the old gen directly.
Other sources of long GCs can be the JVM itself logging.
See link:https://engineering.linkedin.com/blog/2016/02/eliminating-large-jvm-gc-pauses-caused-by-background-io-traffic[Eliminating Large JVM GC Pauses Caused by Background IO Traffic].
For more information about GC logs, see <<trouble.log.gc>>.
Consider also enabling the off-heap Block Cache.
This has been shown to mitigate GC pause times.
See <<block.cache>>.
[[perf.configurations]]
== HBase Configurations
See <<recommended_configurations>>.
[[perf.99th.percentile]]
=== Improving the 99th Percentile
Try <<hedged.reads,hedged reads>>.
[[perf.compactions.and.splits]]
=== Managing Compactions
For larger systems, managing <<compaction,compactions and splits>> may be something you want to consider.
[[perf.handlers]]
=== `hbase.regionserver.handler.count`
See <<hbase.regionserver.handler.count>>.
[[perf.hfile.block.cache.size]]
=== `hfile.block.cache.size`
See <<hfile.block.cache.size>>.
A memory setting for the RegionServer process.
[[blockcache.prefetch]]
=== Prefetch Option for Blockcache
link:https://issues.apache.org/jira/browse/HBASE-9857[HBASE-9857] adds a new option to prefetch HFile contents when opening the BlockCache, if a Column family or RegionServer property is set.
This option is available for HBase 0.98.3 and later.
The purpose is to warm the BlockCache as rapidly as possible after the cache is opened, using in-memory table data, and not counting the prefetching as cache misses.
This is great for fast reads, but is not a good idea if the data to be preloaded will not fit into the BlockCache.
It is useful for tuning the IO impact of prefetching versus the time before all data blocks are in cache.
To enable prefetching on a given column family, you can use HBase Shell or use the API.
.Enable Prefetch Using HBase Shell
----
hbase> create 'MyTable', { NAME => 'myCF', PREFETCH_BLOCKS_ON_OPEN => 'true' }
----
.Enable Prefetch Using the API
====
[source,java]
----
// ...
HTableDescriptor tableDesc = new HTableDescriptor("myTable");
HColumnDescriptor cfDesc = new HColumnDescriptor("myCF");
cfDesc.setPrefetchBlocksOnOpen(true);
tableDesc.addFamily(cfDesc);
// ...
----
====
See the API documentation for
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html[CacheConfig].
To see prefetch in operation, enable TRACE level logging on
`org.apache.hadoop.hbase.io.hfile.HFileReaderImpl` in hbase-2.0+
or on `org.apache.hadoop.hbase.io.hfile.HFileReaderV2` in earlier versions, hbase-1.x, of HBase.
[[perf.rs.memstore.size]]
=== `hbase.regionserver.global.memstore.size`
See <<hbase.regionserver.global.memstore.size>>.
This memory setting is often adjusted for the RegionServer process depending on needs.
[[perf.rs.memstore.size.lower.limit]]
=== `hbase.regionserver.global.memstore.size.lower.limit`
See <<hbase.regionserver.global.memstore.size.lower.limit>>.
This memory setting is often adjusted for the RegionServer process depending on needs.
[[perf.hstore.blockingstorefiles]]
=== `hbase.hstore.blockingStoreFiles`
See <<hbase.hstore.blockingStoreFiles>>.
If there is blocking in the RegionServer logs, increasing this can help.
[[perf.hregion.memstore.block.multiplier]]
=== `hbase.hregion.memstore.block.multiplier`
See <<hbase.hregion.memstore.block.multiplier>>.
If there is enough RAM, increasing this can help.
[[hbase.regionserver.checksum.verify.performance]]
=== `hbase.regionserver.checksum.verify`
Have HBase write the checksum into the datablock and save having to do the checksum seek whenever you read.
See <<hbase.regionserver.checksum.verify>>, <<hbase.hstore.bytes.per.checksum>> and <<hbase.hstore.checksum.algorithm>>. For more information see the release note on link:https://issues.apache.org/jira/browse/HBASE-5074[HBASE-5074 support checksums in HBase block cache].
=== Tuning `callQueue` Options
link:https://issues.apache.org/jira/browse/HBASE-11355[HBASE-11355] introduces several callQueue tuning mechanisms which can increase performance.
See the JIRA for some benchmarking information.
To increase the number of callqueues, set `hbase.ipc.server.num.callqueue` to a value greater than `1`.
To split the callqueue into separate read and write queues, set `hbase.ipc.server.callqueue.read.ratio` to a value between `0` and `1`.
This factor weights the queues toward writes (if below .5) or reads (if above .5). Another way to say this is that the factor determines what percentage of the split queues are used for reads.
The following examples illustrate some of the possibilities.
Note that you always have at least one write queue, no matter what setting you use.
* The default value of `0` does not split the queue.
* A value of `.3` uses 30% of the queues for reading and 70% for writing.
Given a value of `10` for `hbase.ipc.server.num.callqueue`, 3 queues would be used for reads and 7 for writes.
* A value of `.5` uses the same number of read queues and write queues.
Given a value of `10` for `hbase.ipc.server.num.callqueue`, 5 queues would be used for reads and 5 for writes.
* A value of `.6` uses 60% of the queues for reading and 40% for writing.
Given a value of `10` for `hbase.ipc.server.num.callqueue`, 6 queues would be used for reads and 4 for writes.
* A value of `1.0` uses one queue to process write requests, and all other queues process read requests.
A value higher than `1.0` has the same effect as a value of `1.0`.
Given a value of `10` for `hbase.ipc.server.num.callqueue`, 9 queues would be used for reads and 1 for writes.
You can also split the read queues so that separate queues are used for short reads (from Get operations) and long reads (from Scan operations), by setting the `hbase.ipc.server.callqueue.scan.ratio` option.
This option is a factor between 0 and 1, which determines the ratio of read queues used for Gets and Scans.
More queues are used for Gets if the value is below `.5` and more are used for scans if the value is above `.5`.
No matter what setting you use, at least one read queue is used for Get operations.
* A value of `0` does not split the read queue.
* A value of `.3` uses 70% of the read queues for Gets and 30% for Scans.
Given a value of `20` for `hbase.ipc.server.num.callqueue` and a value of `.5` for `hbase.ipc.server.callqueue.read.ratio`, 10 queues would be used for reads, out of those 10, 7 would be used for Gets and 3 for Scans.
* A value of `.5` uses half the read queues for Gets and half for Scans.
Given a value of `20` for `hbase.ipc.server.num.callqueue` and a value of `.5` for `hbase.ipc.server.callqueue.read.ratio`, 10 queues would be used for reads, out of those 10, 5 would be used for Gets and 5 for Scans.
* A value of `.7` uses 30% of the read queues for Gets and 70% for Scans.
Given a value of `20` for `hbase.ipc.server.num.callqueue` and a value of `.5` for `hbase.ipc.server.callqueue.read.ratio`, 10 queues would be used for reads, out of those 10, 3 would be used for Gets and 7 for Scans.
* A value of `1.0` uses all but one of the read queues for Scans.
Given a value of `20` for `hbase.ipc.server.num.callqueue` and a value of `.5` for `hbase.ipc.server.callqueue.read.ratio`, 10 queues would be used for reads, out of those 10, 1 would be used for Gets and 9 for Scans.
You can use the new option `hbase.ipc.server.callqueue.handler.factor` to programmatically tune the number of queues:
* A value of `0` uses a single shared queue between all the handlers.
* A value of `1` uses a separate queue for each handler.
* A value between `0` and `1` tunes the number of queues against the number of handlers.
For instance, a value of `.5` shares one queue between each two handlers.
+
Having more queues, such as in a situation where you have one queue per handler, reduces contention when adding a task to a queue or selecting it from a queue.
The trade-off is that if you have some queues with long-running tasks, a handler may end up waiting to execute from that queue rather than processing another queue which has waiting tasks.
For these values to take effect on a given RegionServer, the RegionServer must be restarted.
These parameters are intended for testing purposes and should be used carefully.
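As a concrete sketch of the examples above (20 queues, half of them for reads, and the read queues split evenly between Gets and Scans), an _hbase-site.xml_ fragment might look like:

[source,xml]
----
<property>
  <name>hbase.ipc.server.num.callqueue</name>
  <value>20</value>
</property>
<property>
  <name>hbase.ipc.server.callqueue.read.ratio</name>
  <value>0.5</value>
</property>
<property>
  <name>hbase.ipc.server.callqueue.scan.ratio</name>
  <value>0.5</value>
</property>
----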
[[perf.zookeeper]]
== ZooKeeper
See <<zookeeper>> for information on configuring ZooKeeper, and see the part about having a dedicated disk.
[[perf.schema]]
== Schema Design
[[perf.number.of.cfs]]
=== Number of Column Families
See <<number.of.cfs>>.
[[perf.schema.keys]]
=== Key and Attribute Lengths
See <<keysize>>.
See also <<perf.compression.however>> for compression caveats.
[[schema.regionsize]]
=== Table RegionSize
The regionsize can be set on a per-table basis via `setFileSize` on link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html[HTableDescriptor] in the event where certain tables require different regionsizes than the configured default regionsize.
See <<ops.capacity.regions>> for more information.
[[schema.bloom]]
=== Bloom Filters
A Bloom filter, named for its creator, Burton Howard Bloom, is a data structure which is designed to predict whether a given element is a member of a set of data.
A positive result from a Bloom filter is not always accurate, but a negative result is guaranteed to be accurate.
Bloom filters are designed to be "accurate enough" for sets of data which are so large that conventional hashing mechanisms would be impractical.
For more information about Bloom filters in general, refer to http://en.wikipedia.org/wiki/Bloom_filter.
In terms of HBase, Bloom filters provide a lightweight in-memory structure to reduce the number of disk reads for a given Get operation (Bloom filters do not work with Scans) to only the StoreFiles likely to contain the desired Row.
The potential performance gain increases with the number of parallel reads.
The Bloom filters themselves are stored in the metadata of each HFile and never need to be updated.
When an HFile is opened because a region is deployed to a RegionServer, the Bloom filter is loaded into memory.
HBase includes some tuning mechanisms for folding the Bloom filter to reduce the size and keep the false positive rate within a desired range.
Bloom filters were introduced in link:https://issues.apache.org/jira/browse/HBASE-1200[HBASE-1200].
Since HBase 0.96, row-based Bloom filters are enabled by default.
(link:https://issues.apache.org/jira/browse/HBASE-8450[HBASE-8450])
For more information on Bloom filters in relation to HBase, see <<blooms>> for more information, or the following Quora discussion: link:http://www.quora.com/How-are-bloom-filters-used-in-HBase[How are bloom filters used in HBase?].
[[bloom.filters.when]]
==== When To Use Bloom Filters
Since HBase 0.96, row-based Bloom filters are enabled by default.
You may choose to disable them or to change some tables to use row+column Bloom filters, depending on the characteristics of your data and how it is loaded into HBase.
To determine whether Bloom filters could have a positive impact, check the value of `blockCacheHitRatio` in the RegionServer metrics.
If Bloom filters are enabled, the value of `blockCacheHitRatio` should increase, because the Bloom filter is filtering out blocks that are definitely not needed.
You can choose to enable Bloom filters for a row or for a row+column combination.
If you generally scan entire rows, the row+column combination will not provide any benefit.
A row-based Bloom filter can operate on a row+column Get, but not the other way around.
However, if you have a large number of column-level Puts, such that a row may be present in every StoreFile, a row-based filter will always return a positive result and provide no benefit.
Unless you have one column per row, row+column Bloom filters require more space, in order to store more keys.
Bloom filters work best when the size of each data entry is at least a few kilobytes in size.
Overhead will be reduced when your data is stored in a few larger StoreFiles, to avoid extra disk IO during low-level scans to find a specific row.
Bloom filters need to be rebuilt upon deletion, so may not be appropriate in environments with a large number of deletions.
==== Enabling Bloom Filters
Bloom filters are enabled on a Column Family.
You can do this in the HBase Shell, or by using the `setBloomFilterType` method of `HColumnDescriptor` via the HBase API.
Valid values are `NONE`, `ROW` (default), or `ROWCOL`.
See <<bloom.filters.when>> for more information on `ROW` versus `ROWCOL`.
See also the API documentation for link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor].
The following example creates a table and enables a ROWCOL Bloom filter on the `colfam1` column family.
----
hbase> create 'mytable',{NAME => 'colfam1', BLOOMFILTER => 'ROWCOL'}
----
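A sketch of the equivalent using the API; table and family names match the shell example, and error handling is omitted:

[source,java]
----
HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf("mytable"));
HColumnDescriptor cfDesc = new HColumnDescriptor("colfam1");
cfDesc.setBloomFilterType(BloomType.ROWCOL); // NONE, ROW, or ROWCOL
tableDesc.addFamily(cfDesc);
admin.createTable(tableDesc);
----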
==== Configuring Server-Wide Behavior of Bloom Filters
You can configure the following settings in the _hbase-site.xml_.
[cols="1,1,1", options="header"]
|===
| Parameter
| Default
| Description
| io.storefile.bloom.enabled
| yes
| Set to no to kill bloom filters server-wide if something goes wrong
| io.storefile.bloom.error.rate
| .01
| The average false positive rate for bloom filters. Folding is used to
maintain the false positive rate. Expressed as a decimal representation of a
percentage.
| io.storefile.bloom.max.fold
| 7
| The guaranteed maximum fold rate. Changing this setting should not be
necessary and is not recommended.
| io.storefile.bloom.max.keys
| 128000000
| For default (single-block) Bloom filters, this specifies the maximum number of keys.
| io.storefile.delete.family.bloom.enabled
| true
| Master switch to enable Delete Family Bloom filters and store them in the StoreFile.
| io.storefile.bloom.block.size
| 131072
| Target Bloom block size. Bloom filter blocks of approximately this size
are interleaved with data blocks.
| hfile.block.bloom.cacheonwrite
| false
| Enables cache-on-write for inline blocks of a compound Bloom filter.
|===
[[schema.cf.blocksize]]
=== ColumnFamily BlockSize
The blocksize can be configured for each ColumnFamily in a table, and defaults to 64k.
Larger cell values require larger blocksizes.
There is an inverse relationship between blocksize and the resulting StoreFile indexes (i.e., if the blocksize is doubled then the resulting indexes should be roughly halved).
See link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor] and <<store>> for more information.
[[cf.in.memory]]
=== In-Memory ColumnFamilies
ColumnFamilies can optionally be defined as in-memory.
Data is still persisted to disk, just like any other ColumnFamily.
In-memory blocks have the highest priority in the <<block.cache>>, but it is not a guarantee that the entire table will be in memory.
See link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor] for more information.
[[perf.compression]]
=== Compression
Production systems should use compression with their ColumnFamily definitions.
See <<compression>> for more information.
[[perf.compression.however]]
==== However...
Compression deflates data _on disk_.
When it's in-memory (e.g., in the MemStore) or on the wire (e.g., transferring between RegionServer and Client) it's inflated.
So while using ColumnFamily compression is a best practice, it's not going to completely eliminate the impact of over-sized Keys, over-sized ColumnFamily names, or over-sized Column names.
See <<keysize>> for schema design tips, and <<keyvalue>> for more information on how HBase stores data internally.
[[perf.general]]
== HBase General Patterns
[[perf.general.constants]]
=== Constants
When people get started with HBase they have a tendency to write code that looks like this:
[source,java]
----
Get get = new Get(rowkey);
Result r = table.get(get);
byte[] b = r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("attr")); // returns current version of value
----
But especially when inside loops (and MapReduce jobs), converting the columnFamily and column-names to byte-arrays repeatedly is surprisingly expensive.
It's better to use constants for the byte-arrays, like this:
[source,java]
----
public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Get get = new Get(rowkey);
Result r = table.get(get);
byte[] b = r.getValue(CF, ATTR); // returns current version of value
----
[[perf.writing]]
== Writing to HBase
[[perf.batch.loading]]
=== Batch Loading
Use the bulk load tool if you can.
See <<arch.bulk.load>>.
Otherwise, pay attention to the below.
[[precreate.regions]]
=== Table Creation: Pre-Creating Regions
Tables in HBase are initially created with one region by default.
For bulk imports, this means that all clients will write to the same region until it is large enough to split and become distributed across the cluster.
A useful pattern to speed up the bulk import process is to pre-create empty regions.
Be somewhat conservative in this, because too-many regions can actually degrade performance.
There are two different approaches to pre-creating splits using the HBase API.
The first approach is to rely on the default `Admin` strategy (which is implemented in `Bytes.split`)...
[source,java]
----
byte[] startKey = ...; // your lowest key
byte[] endKey = ...; // your highest key
int numberOfRegions = ...; // # of regions to create
admin.createTable(table, startKey, endKey, numberOfRegions);
----
And the other approach, using the HBase API, is to define the splits yourself...
[source,java]
----
byte[][] splits = ...; // create your own splits
admin.createTable(table, splits);
----
You can achieve a similar effect using the HBase Shell to create tables by specifying split options.
[source]
----
# create table with specific split points
hbase> create 't1','f1',SPLITS => ['\x10\x00', '\x20\x00', '\x30\x00', '\x40\x00']

# create table with four regions based on random bytes keys
hbase> create 't2','f1', { NUMREGIONS => 4 , SPLITALGO => 'UniformSplit' }

# create table with five regions based on hex keys
hbase> create 't3','f1', { NUMREGIONS => 5, SPLITALGO => 'HexStringSplit' }
----
See <<rowkey.regionsplits>> for issues related to understanding your keyspace and pre-creating regions.
See <<manual_region_splitting_decisions,manual region splitting decisions>> for discussion on manually pre-splitting regions.
See <<tricks.pre-split>> for more details of using the HBase Shell to pre-split tables.
[[def.log.flush]]
=== Table Creation: Deferred Log Flush
The default behavior for Puts using the Write Ahead Log (WAL) is that `WAL` edits will be written immediately.
If deferred log flush is used, WAL edits are kept in memory until the flush period.
The benefit is aggregated and asynchronous `WAL` writes, but the potential downside is that if the RegionServer goes down the yet-to-be-flushed edits are lost.
This is safer, however, than not using WAL at all with Puts.
Deferred log flush can be configured on tables via link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html[HTableDescriptor].
The default value of `hbase.regionserver.optionallogflushinterval` is 1000ms.
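In current APIs, deferred log flush corresponds to the `ASYNC_WAL` durability setting; a sketch using hbase-1.x-era classes (the table name is illustrative):

[source,java]
----
HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf("myTable"));
tableDesc.setDurability(Durability.ASYNC_WAL); // deferred log flush
admin.modifyTable(TableName.valueOf("myTable"), tableDesc);
----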
[[perf.hbase.client.putwal]]
=== HBase Client: Turn off WAL on Puts
A frequent request is to disable the WAL to increase performance of Puts.
This is only appropriate for bulk loads, as it puts your data at risk by removing the protection of the WAL in the event of a region server crash.
Bulk loads can be re-run in the event of a crash, with little risk of data loss.
WARNING: If you disable the WAL for anything other than bulk loads, your data is at risk.
In general, it is best to use WAL for Puts, and where loading throughput is a concern to use bulk loading techniques instead.
For normal Puts, you are not likely to see a performance improvement which would outweigh the risk.
To disable the WAL, see <<wal.disable>>.
[[perf.hbase.client.regiongroup]]
=== HBase Client: Group Puts by RegionServer
In addition to using the writeBuffer, grouping `Put`s by RegionServer can reduce the number of client RPC calls per writeBuffer flush.
There is a utility `HTableUtil` currently on MASTER that does this, but you can either copy that or implement your own version for those still on 0.90.x or earlier.
[[perf.hbase.write.mr.reducer]]
=== MapReduce: Skip The Reducer
When writing a lot of data to an HBase table from a MR job (e.g., with link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html[TableOutputFormat]), and specifically where Puts are being emitted from the Mapper, skip the Reducer step.
When a Reducer step is used, all of the output (Puts) from the Mapper will get spooled to disk, then sorted/shuffled to other Reducers that will most likely be off-node.
It's far more efficient to just write directly to HBase.
For summary jobs where HBase is used as a source and a sink, then writes will be coming from the Reducer step (e.g., summarize values then write out result). This is a different processing problem than from the above case.
[[perf.one.region]]
=== Anti-Pattern: One Hot Region
If all your data is being written to one region at a time, then re-read the section on processing timeseries data.
Also, if you are pre-splitting regions and all your data is _still_ winding up in a single region even though your keys aren't monotonically increasing, confirm that your keyspace actually works with the split strategy.
There are a variety of reasons that regions may appear "well split" but won't work with your data.
As the HBase client communicates directly with the RegionServers, this can be obtained via link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/RegionLocator.html#getRegionLocation-byte:A-[RegionLocator.getRegionLocation].
See <<precreate.regions>>, as well as <<perf.configurations>>
[[perf.reading]]
== Reading from HBase
The mailing list can help if you are having performance issues.
[[perf.hbase.client.caching]]
=== Scan Caching
If HBase is used as an input source for a MapReduce job, for example, make sure that the input link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[Scan] instance to the MapReduce job has `setCaching` set to something greater than the default (which is 1). Using the default value means that the map-task will make a call back to the region-server for every record processed.
Setting this value to 500, for example, will transfer 500 rows at a time to the client to be processed.
There is a cost/benefit to having the cache value be large, because it costs more in memory for both client and RegionServer, so bigger isn't always better.
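For example, a sketch setting the caching on the input Scan of a MapReduce job; 500 is just the figure used above, and the table, mapper, and output classes are illustrative:

[source,java]
----
Scan scan = new Scan();
scan.setCaching(500); // default is 1; 500 rows are shipped back per RPC
TableMapReduceUtil.initTableMapperJob("myTable", scan, MyMapper.class,
  Text.class, LongWritable.class, job);
----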
[[perf.hbase.client.caching.mr]]
==== Scan Caching in MapReduce Jobs
Scan settings in MapReduce jobs deserve special attention.
Timeouts can result (e.g., UnknownScannerException) in Map tasks if it takes longer to process a batch of records than the scanner lease allows before the client goes back to the RegionServer for the next set of data.
This problem can occur because there is non-trivial processing occurring per row.
If you process rows quickly, set caching higher.
If you process rows more slowly (e.g., lots of transformations per row, writes), then set caching lower.
Timeouts can also happen in a non-MapReduce use case (i.e., single threaded HBase client doing a Scan), but the processing that is often performed in MapReduce jobs tends to exacerbate this issue.
[[perf.hbase.client.selection]]
=== Scan Attribute Selection
Whenever a Scan is used to process large numbers of rows (and especially when used as a MapReduce source), be aware of which attributes are selected.
If `scan.addFamily` is called then _all_ of the attributes in the specified ColumnFamily will be returned to the client.
If only a small number of the available attributes are to be processed, then only those attributes should be specified in the input scan because attribute over-selection is a non-trivial performance penalty over large datasets.
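For example, prefer selecting only the needed column over the whole family; the family and qualifier names here are illustrative:

[source,java]
----
Scan scan = new Scan();
// scan.addFamily(Bytes.toBytes("cf")); // would return every attribute in 'cf'
scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr")); // returns only 'cf:attr'
----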
[[perf.hbase.client.seek]]
=== Avoid scan seeks
When columns are selected explicitly with `scan.addColumn`, HBase will schedule seek operations to seek between the selected columns.
When rows have few columns and each column has only a few versions this can be inefficient.
A seek operation is generally slower if it does not seek at least past 5-10 columns/versions or 512-1024 bytes.
In order to opportunistically look ahead a few columns/versions to see if the next column/version can be found that way before a seek operation is scheduled, a new attribute `Scan.HINT_LOOKAHEAD` can be set on the Scan object.
The following code instructs the RegionServer to attempt two iterations of next before a seek is scheduled:
[source,java]
----
Scan scan = new Scan();
scan.addColumn(...);
scan.setAttribute(Scan.HINT_LOOKAHEAD, Bytes.toBytes(2));
table.getScanner(scan);
----
[[perf.hbase.mr.input]]
=== MapReduce - Input Splits
For MapReduce jobs that use HBase tables as a source, if there is a pattern where the "slow" map tasks seem to have the same Input Split (i.e., the RegionServer serving the data), see the Troubleshooting Case Study in <<casestudies.slownode>>.
[[perf.hbase.client.scannerclose]]
=== Close ResultScanners
This isn't so much about improving performance but rather _avoiding_ performance problems.
If you forget to close link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/ResultScanner.html[ResultScanners] you can cause problems on the RegionServers.
Always have ResultScanner processing enclosed in a try/finally block, as in the example below.
[source,java]
----
Scan scan = new Scan();
// set attrs...
ResultScanner rs = table.getScanner(scan);
try {
  for (Result r = rs.next(); r != null; r = rs.next()) {
    // process result...
  }
} finally {
  rs.close(); // always close the ResultScanner!
}
table.close();
----
[[perf.hbase.client.blockcache]]
=== Block Cache
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[Scan] instances can be set to use the block cache in the RegionServer via the `setCacheBlocks` method.
For input Scans to MapReduce jobs, this should be `false`.
For frequently accessed rows, it is advisable to use the block cache.
Cache more data by moving your Block Cache off-heap.
See <<offheap.blockcache>>.
[[perf.hbase.client.rowkeyonly]]
=== Optimal Loading of Row Keys
When performing a table link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[scan] where only the row keys are needed (no families, qualifiers, values or timestamps), add a FilterList with a `MUST_PASS_ALL` operator to the scanner using `setFilter`.
The filter list should include both a link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.html[FirstKeyOnlyFilter] and a link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/KeyOnlyFilter.html[KeyOnlyFilter].
Using this filter combination will result in a worst case scenario of a RegionServer reading a single value from disk and minimal network traffic to the client for a single row.
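A sketch of the filter combination described above:

[source,java]
----
Scan scan = new Scan();
FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ALL);
filters.addFilter(new FirstKeyOnlyFilter());
filters.addFilter(new KeyOnlyFilter());
scan.setFilter(filters);
----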
[[perf.hbase.read.dist]]
=== Concurrency: Monitor Data Spread
When performing a high number of concurrent reads, monitor the data spread of the target tables.
If the target table(s) have too few regions then the reads could likely be served from too few nodes.
See <<precreate.regions>>, as well as <<perf.configurations>>.
[[blooms]]
=== Bloom Filters
Enabling Bloom Filters can save you having to go to disk and can help improve read latencies.
link:http://en.wikipedia.org/wiki/Bloom_filter[Bloom filters] were developed in link:https://issues.apache.org/jira/browse/HBASE-1200[HBASE-1200 Add bloomfilters].
For description of the development process -- why static blooms rather than dynamic -- and for an overview of the unique properties that pertain to blooms in HBase, as well as possible future directions, see the _Development Process_ section of the document link:https://issues.apache.org/jira/secure/attachment/12444007/Bloom_Filters_in_HBase.pdf[BloomFilters in HBase] attached to link:https://issues.apache.org/jira/browse/HBASE-1200[HBASE-1200].
The bloom filters described here are actually version two of blooms in HBase.
In versions up to 0.19.x, HBase had a dynamic bloom option based on work done by the link:http://www.onelab.org[European Commission One-Lab Project 034819].
The core of the HBase bloom work was later pulled up into Hadoop to implement org.apache.hadoop.io.BloomMapFile.
Version 1 of HBase blooms never worked that well.
Version 2 is a rewrite from scratch though again it starts with the one-lab work.
See also <<schema.bloom>>.
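As a sketch of enabling a row-level bloom on a column family via the HBase 2.x descriptor builder API (the table and family names are illustrative, and `admin` is assumed to be an open `Admin` handle):

[source,java]
----
TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("my_table"))
  .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf"))
    .setBloomFilterType(BloomType.ROW) // or ROWCOL; NONE disables blooms
    .build())
  .build();
admin.createTable(td);
----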
[[bloom_footprint]]
==== Bloom StoreFile footprint
Bloom filters add an entry to the `StoreFile` general `FileInfo` data structure and then two extra entries to the `StoreFile` metadata section.
===== BloomFilter in the `StoreFile` `FileInfo` data structure
`FileInfo` has a `BLOOM_FILTER_TYPE` entry which is set to `NONE`, `ROW` or `ROWCOL`.
===== BloomFilter entries in `StoreFile` metadata
`BLOOM_FILTER_META` holds Bloom Size, Hash Function used, etc.
It's small in size and is cached on `StoreFile.Reader` load.
`BLOOM_FILTER_DATA` is the actual bloomfilter data.
Obtained on-demand.
Stored in the LRU cache, if it is enabled (It's enabled by default).
[[config.bloom]]
==== Bloom Filter Configuration
===== `io.storefile.bloom.enabled` global kill switch
`io.storefile.bloom.enabled` in `Configuration` serves as the kill switch in case something goes wrong.
Default = `true`.
===== `io.storefile.bloom.error.rate`
`io.storefile.bloom.error.rate` = average false positive rate.
Default = 1%. Decreasing the rate by half (e.g. to .5%) costs +1 bit per bloom entry.
===== `io.storefile.bloom.max.fold`
`io.storefile.bloom.max.fold` = guaranteed minimum fold rate.
Most people should leave this alone.
Default = 7, or can collapse to at least 1/128th of original size.
See the _Development Process_ section of the document link:https://issues.apache.org/jira/secure/attachment/12444007/Bloom_Filters_in_HBase.pdf[BloomFilters in HBase] for more on what this option means.
[[hedged.reads]]
=== Hedged Reads
Hedged reads are a feature of HDFS, introduced in Hadoop 2.4.0 with link:https://issues.apache.org/jira/browse/HDFS-5776[HDFS-5776].
Normally, a single thread is spawned for each read request.
However, if hedged reads are enabled, the client waits some
configurable amount of time, and if the read does not return,
the client spawns a second read request, against a different
block replica of the same data. Whichever read returns first is
used, and the other read request is discarded.
Hedged reads are "...very good at eliminating outlier datanodes, which
in turn makes them very good choice for latency sensitive setups.
But, if you are looking for maximizing throughput, hedged reads tend to
create load amplification as things get slower in general. In short,
the thing to watch out for is the non-graceful performance degradation
when you are running close a certain throughput threshold." (Quote from Ashu Pachauri in HBASE-17083).
Other concerns to keep in mind while running with hedged reads enabled
include:
* They may lead to network congestion. See link:https://issues.apache.org/jira/browse/HBASE-17083[HBASE-17083]
* Make sure you set the thread pool large enough so as blocking on the pool does not become a bottleneck (Again see link:https://issues.apache.org/jira/browse/HBASE-17083[HBASE-17083])
(From Yu Li in HBASE-17083.)
Because an HBase RegionServer is an HDFS client, you can enable hedged
reads in HBase by adding the following properties to the RegionServer's
_hbase-site.xml_ and tuning the values to suit your environment.
.Configuration for Hedged Reads
* `dfs.client.hedged.read.threadpool.size` - the number of threads dedicated to servicing hedged reads.
If this is set to 0 (the default), hedged reads are disabled.
* `dfs.client.hedged.read.threshold.millis` - the number of milliseconds to wait before spawning a second read thread.
.Hedged Reads Configuration Example
====
[source,xml]
----
<property>
<name>dfs.client.hedged.read.threadpool.size</name>
<value>20</value> <!-- 20 threads -->
</property>
<property>
<name>dfs.client.hedged.read.threshold.millis</name>
<value>10</value> <!-- 10 milliseconds -->
</property>
----
====
Use the following metrics to tune the settings for hedged reads on your cluster.
See <<hbase_metrics>> for more information.
.Metrics for Hedged Reads
* hedgedReadOps - the number of times hedged read threads have been triggered.
This could indicate that read requests are often slow, or that hedged reads are triggered too quickly.
* hedgeReadOpsWin - the number of times the hedged read thread was faster than the original thread.
This could indicate that a given RegionServer is having trouble servicing requests.
* hedgedReadOpsInCurThread - the number of times hedged read was rejected from executor and needed to fallback to be executed in current thread.
This could indicate that current hedged read thread pool size is not appropriate.
[[perf.deleting]]
== Deleting from HBase
[[perf.deleting.queue]]
=== Using HBase Tables as Queues
HBase tables are sometimes used as queues.
In this case, special care must be taken to regularly perform major compactions on tables used in this manner.
As is documented in <<datamodel>>, marking rows as deleted creates additional StoreFiles which then need to be processed on reads.
Tombstones only get cleaned up with major compactions.
See also <<compaction>> and link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html#majorCompact-org.apache.hadoop.hbase.TableName-[Admin.majorCompact].
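As a sketch (assuming an open `Admin` handle and an illustrative table name), a major compaction can be requested programmatically; note the call is asynchronous in that it returns once the compaction request has been made:

[source,java]
----
// Request a major compaction so delete tombstones actually get cleaned up.
admin.majorCompact(TableName.valueOf("my_queue_table"));
----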
[[perf.deleting.rpc]]
=== Delete RPC Behavior
Be aware that `Table.delete(Delete)` doesn't use the writeBuffer.
It will execute a RegionServer RPC with each invocation.
For a large number of deletes, consider `Table.delete(List)`.
See link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#delete-org.apache.hadoop.hbase.client.Delete-[hbase.client.Delete].
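A minimal sketch of batching deletes (the `rowsToDelete` collection and the open `Table` handle are assumed for illustration):

[source,java]
----
List<Delete> deletes = new ArrayList<>();
for (byte[] row : rowsToDelete) {
  deletes.add(new Delete(row));
}
// One call ships the whole batch rather than one RPC per Delete.
table.delete(deletes);
----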
[[perf.hdfs]]
== HDFS
Because HBase runs on <<arch.hdfs>> it is important to understand how it works and how it affects HBase.
[[perf.hdfs.curr]]
=== Current Issues With Low-Latency Reads
The original use-case for HDFS was batch processing.
As such, low-latency reads were historically not a priority.
With the increased adoption of Apache HBase this is changing, and several improvements are already in development.
See the link:https://issues.apache.org/jira/browse/HDFS-1599[Umbrella Jira Ticket for HDFS Improvements for HBase].
[[perf.hdfs.configs.localread]]
=== Leveraging local data
Since Hadoop 1.0.0 (also 0.22.1, 0.23.1, CDH3u3 and HDP 1.0) via link:https://issues.apache.org/jira/browse/HDFS-2246[HDFS-2246], it is possible for the DFSClient to take a "short circuit" and read directly from the disk instead of going through the DataNode when the data is local.
What this means for HBase is that the RegionServers can read directly off their machine's disks instead of having to open a socket to talk to the DataNode, the former being generally much faster.
See JD's link:http://files.meetup.com/1350427/hug_ebay_jdcryans.pdf[Performance Talk].
Also see link:https://lists.apache.org/thread.html/ce2ce3a3bbd20806d0c017b2e7528e78a46ccb87c063831db051949d%401347548325%40%3Cdev.hbase.apache.org%3E[HBase, mail # dev - read short circuit] thread for more discussion around short circuit reads.
How to enable "short circuit" reads depends on your version of Hadoop.
The original shortcircuit read patch was much improved upon in Hadoop 2 in link:https://issues.apache.org/jira/browse/HDFS-347[HDFS-347].
See http://blog.cloudera.com/blog/2013/08/how-improved-short-circuit-local-reads-bring-better-performance-and-security-to-hadoop/ for details on the difference between the old and new implementations.
See link:http://archive.cloudera.com/cdh4/cdh/4/hadoop/hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html[Hadoop shortcircuit reads configuration page] for how to enable the latter, better version of shortcircuit.
For example, here is a minimal config enabling short-circuit reads, added to _hbase-site.xml_:
[source,xml]
----
<property>
<name>dfs.client.read.shortcircuit</name>
<value>true</value>
<description>
This configuration parameter turns on short-circuit local reads.
</description>
</property>
<property>
<name>dfs.domain.socket.path</name>
<value>/home/stack/sockets/short_circuit_read_socket_PORT</value>
<description>
Optional. This is a path to a UNIX domain socket that will be used for
communication between the DataNode and local HDFS clients.
If the string "_PORT" is present in this path, it will be replaced by the
TCP port of the DataNode.
</description>
</property>
----
Be careful about permissions for the directory that hosts the shared domain socket; the DFSClient will complain if it is open to users other than the HBase user.
If you are running on an old Hadoop, one that is without link:https://issues.apache.org/jira/browse/HDFS-347[HDFS-347] but that has link:https://issues.apache.org/jira/browse/HDFS-2246[HDFS-2246], you must set two configurations.
First, the hdfs-site.xml needs to be amended.
Set the property `dfs.block.local-path-access.user` to be the _only_ user that can use the shortcut.
This has to be the user that started HBase.
Then in hbase-site.xml, set `dfs.client.read.shortcircuit` to be `true`.
Services -- at least the HBase RegionServers -- will need to be restarted in order to pick up the new configurations.
.dfs.client.read.shortcircuit.buffer.size
[NOTE]
====
The default for this value is too high when running on a highly trafficked HBase.
In HBase, if this value has not been set, we set it down from the default of 1M to 128k (since HBase 0.98.0 and 0.96.1). See link:https://issues.apache.org/jira/browse/HBASE-8143[HBASE-8143 HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM]. The Hadoop DFSClient in HBase will allocate a direct byte buffer of this size for _each_ block it has open; given HBase keeps its HDFS files open all the time, this can add up quickly.
====
[[perf.hdfs.comp]]
=== Performance Comparisons of HBase vs. HDFS
A fairly common question on the dist-list is why HBase isn't as performant as HDFS files in a batch context (e.g., as a MapReduce source or sink). The short answer is that HBase is doing a lot more than HDFS (e.g., reading the KeyValues, returning the most current row or specified timestamps, etc.), and as such HBase is 4-5 times slower than HDFS in this processing context.
There is room for improvement and this gap will, over time, be reduced, but HDFS will always be faster in this use-case.
[[perf.ec2]]
== Amazon EC2
Performance questions are common on Amazon EC2 environments because it is a shared environment.
You will not see the same throughput as a dedicated server.
In terms of running tests on EC2, run them several times for the same reason (i.e., it's a shared environment and you don't know what else is happening on the server).
If you are running on EC2 and post performance questions on the dist-list, please state this fact up-front, because EC2 issues are practically a separate class of performance issues.
[[perf.hbase.mr.cluster]]
== Collocating HBase and MapReduce
It is often recommended to have different clusters for HBase and MapReduce.
A better qualification of this is: don't collocate an HBase that serves live requests with a heavy MR workload.
OLTP and OLAP-optimized systems have conflicting requirements and one will lose to the other, usually the former.
For example, short latency-sensitive disk reads will have to wait in line behind longer reads that are trying to squeeze out as much throughput as possible.
MR jobs that write to HBase will also generate flushes and compactions, which will in turn invalidate blocks in the <<block.cache>>.
If you need to process the data from your live HBase cluster in MR, you can ship the deltas with <<copy.table>> or use replication to get the new data in real time on the OLAP cluster.
In the worst case, if you really need to collocate both, set MR to use fewer Map and Reduce slots than you'd normally configure, possibly just one.
When HBase is used for OLAP operations, it's preferable to set it up in a hardened way like configuring the ZooKeeper session timeout higher and giving more memory to the MemStores (the argument being that the Block Cache won't be used much since the workloads are usually long scans).
[[perf.casestudy]]
== Case Studies
For Performance and Troubleshooting Case Studies, see <<casestudies>>.
ifdef::backend-docbook[]
[index]
== Index
// Generated automatically by the DocBook toolchain.
endif::backend-docbook[]

View File

@ -1,108 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[preface]
= Preface
:doctype: article
:numbered:
:toc: left
:icons: font
:experimental:
This is the official reference guide for the link:https://hbase.apache.org/[HBase] version it ships with.
Herein you will find either the definitive documentation on an HBase topic as of its
standing when the referenced HBase version shipped, or it will point to the location
in link:https://hbase.apache.org/apidocs/index.html[Javadoc] or
link:https://issues.apache.org/jira/browse/HBASE[JIRA] where the pertinent information can be found.
.About This Guide
This reference guide is a work in progress. The source for this guide can be found in the
_src/main/asciidoc_ directory of the HBase source. This reference guide is marked up
using link:http://asciidoc.org/[AsciiDoc] from which the finished guide is generated as part of the
'site' build target. Run
[source,bourne]
----
mvn site
----
to generate this documentation.
Amendments and improvements to the documentation are welcomed.
Click
link:https://issues.apache.org/jira/secure/CreateIssueDetails!init.jspa?pid=12310753&issuetype=1&components=12312132&summary=SHORT+DESCRIPTION[this link]
to file a new documentation bug against Apache HBase with some values pre-selected.
.Contributing to the Documentation
For an overview of AsciiDoc and suggestions to get started contributing to the documentation,
see the <<appendix_contributing_to_documentation,relevant section later in this documentation>>.
.Heads-up if this is your first foray into the world of distributed computing...
If this is your first foray into the wonderful world of Distributed Computing, then you are in for some interesting times.
First off, distributed systems are hard; making a distributed system hum requires a disparate skillset that spans systems (hardware and software) and networking.
Your cluster's operation can hiccup because of any of a myriad set of reasons from bugs in HBase itself through misconfigurations -- misconfiguration of HBase but also operating system misconfigurations -- through to hardware problems whether it be a bug in your network card drivers or an underprovisioned RAM bus (to mention two recent examples of hardware issues that manifested as "HBase is slow"). You will also need to do a recalibration if up to this point your computing has been bound to a single box.
Here is one good starting point: link:http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing[Fallacies of Distributed Computing].
That said, you are welcome. +
It's a fun place to be. +
Yours, the HBase Community.
.Reporting Bugs
Please use link:https://issues.apache.org/jira/browse/hbase[JIRA] to report non-security-related bugs.
To protect existing HBase installations from new vulnerabilities, please *do not* use JIRA to report security-related bugs. Instead, send your report to the mailing list private@hbase.apache.org, which allows anyone to send messages, but restricts who can read them. Someone on that list will contact you to follow up on your report.
[[hbase_supported_tested_definitions]]
.Support and Testing Expectations
The phrases /supported/, /not supported/, /tested/, and /not tested/ occur several
places throughout this guide. In the interest of clarity, here is a brief explanation
of what is generally meant by these phrases, in the context of HBase.
NOTE: Commercial technical support for Apache HBase is provided by many Hadoop vendors.
This is not the sense in which the term /support/ is used in the context of the
Apache HBase project. The Apache HBase team assumes no responsibility for your
HBase clusters, your configuration, or your data.
Supported::
In the context of Apache HBase, /supported/ means that HBase is designed to work
in the way described, and deviation from the defined behavior or functionality should
be reported as a bug.
Not Supported::
In the context of Apache HBase, /not supported/ means that a use case or use pattern
is not expected to work and should be considered an antipattern. If you think this
designation should be reconsidered for a given feature or use pattern, file a JIRA
or start a discussion on one of the mailing lists.
Tested::
In the context of Apache HBase, /tested/ means that a feature is covered by unit
or integration tests, and has been proven to work as expected.
Not Tested::
In the context of Apache HBase, /not tested/ means that a feature or use pattern
may or may not work in a given way, and may or may not corrupt your data or cause
operational issues. It is an unknown, and there are no guarantees. If you can provide
proof that a feature designated as /not tested/ does work in a given way, please
submit the tests and/or the metrics so that other users can gain certainty about
such features or use patterns.
:numbered:

View File

@ -1,101 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[profiler]]
= Profiler Servlet
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
== Background
HBASE-21926 introduced a new servlet that supports integrated profiling via async-profiler.
== Prerequisites
Go to https://github.com/jvm-profiling-tools/async-profiler, download a release appropriate for your platform, and install on every cluster host.
If running Linux kernel 4.6 or later, be sure to set the proc variables as per the 'Basic Usage' section of the
link:https://github.com/jvm-profiling-tools/async-profiler[Async Profiler Home Page]
(not doing this will draw you diagrams with no content).
Set `ASYNC_PROFILER_HOME` in the environment (put it in hbase-env.sh) to the root directory of the async-profiler install location, or pass it on the HBase daemon's command line as a system property as `-Dasync.profiler.home=/path/to/async-profiler`.
== Usage
Once the prerequisites are satisfied, access to async-profiler is available by way of the HBase UI or direct interaction with the infoserver.
Examples:
* To collect 30 second CPU profile of current process (returns FlameGraph svg)
`curl http://localhost:16030/prof`
* To collect 1 minute CPU profile of current process and output in tree format (html)
`curl http://localhost:16030/prof?output=tree&duration=60`
* To collect 30 second heap allocation profile of current process (returns FlameGraph svg)
`curl http://localhost:16030/prof?event=alloc`
* To collect lock contention profile of current process (returns FlameGraph svg)
`curl http://localhost:16030/prof?event=lock`
The following event types are supported by async-profiler. Use the 'event' parameter to specify. Default is 'cpu'. Not all operating systems will support all types.
Perf events:
* cpu
* page-faults
* context-switches
* cycles
* instructions
* cache-references
* cache-misses
* branches
* branch-misses
* bus-cycles
* L1-dcache-load-misses
* LLC-load-misses
* dTLB-load-misses
Java events:
* alloc
* lock
The following output formats are supported. Use the 'output' parameter to specify. Default is 'flamegraph'.
Output formats:
* summary: A dump of basic profiling statistics.
* traces: Call traces.
* flat: Flat profile (top N hot methods).
* collapsed: Collapsed call traces in the format used by FlameGraph script. This is a collection of call stacks, where each line is a semicolon separated list of frames followed by a counter.
* svg: FlameGraph in SVG format.
* tree: Call tree in HTML format.
* jfr: Call traces in Java Flight Recorder format.
The 'duration' parameter specifies how long to collect trace data before generating output, specified in seconds. The default is 10 seconds.
== UI
In the UI, there is a new entry 'Profiler' in the top menu that will run the default action, which is to profile the CPU usage of the local process for thirty seconds and then produce FlameGraph SVG output.
== Notes
The query parameter `pid` can be used to specify the process id of a specific process to be profiled. If this parameter is missing the local process in which the infoserver is embedded will be profiled. Profile targets that are not JVMs might work but are not specifically supported. There are security implications. Access to the infoserver should be appropriately restricted.

View File

@ -1,222 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[protobuf]]
= Protobuf in HBase
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
== Protobuf
HBase uses Google's link:https://developers.google.com/protocol-buffers/[protobufs] wherever
it persists metadata -- in the tail of hfiles or Cells written by
HBase into the system hbase:meta table or when HBase writes znodes
to zookeeper, etc. -- and when it passes objects over the wire making
xref:hbase.rpc[RPCs]. HBase uses protobufs to describe the RPC
Interfaces (Services) we expose to clients, for example the `Admin` and `Client`
Interfaces that the RegionServer fields,
or specifying the arbitrary extensions added by developers via our
xref:cp[Coprocessor Endpoint] mechanism.
In this chapter we go into detail for developers who are looking to
understand better how it all works. This chapter is of particular
use to those who would amend or extend HBase functionality.
With protobuf, you describe serializations and services in a `.proto` file.
You then feed these descriptors to a protobuf tool, the `protoc` binary,
to generate classes that can marshall and unmarshall the described serializations
and field the specified Services.
See the `README.txt` in the HBase sub-modules for details on how
to run the class generation on a per-module basis;
e.g. see `hbase-protocol/README.txt` for how to generate protobuf classes
in the hbase-protocol module.
In HBase, `.proto` files are either in the `hbase-protocol` module, a module
dedicated to hosting the common proto files and the protoc-generated classes
that HBase uses internally for serializing metadata, or, for extensions to HBase
such as REST or Coprocessor Endpoints that need their own descriptors, located
inside the function's hosting module: e.g. `hbase-rest`
is home to the REST proto files and the `hbase-rsgroup` table grouping
Coprocessor Endpoint has all protos that have to do with table grouping.
Protos are hosted by the module that makes use of them. While
this makes it so generation of protobuf classes is distributed, done
per module, we do it this way so modules encapsulate all to do with
the functionality they bring to hbase.
Extensions whether REST or Coprocessor Endpoints will make use
of core HBase protos found back in the hbase-protocol module. They'll
use these core protos when they want to serialize a Cell or a Put or
refer to a particular node via ServerName, etc., as part of providing the
CPEP Service. Going forward, after the release of hbase-2.0.0, this
practice needs to wither. We'll explain why in the later
xref:shaded.protobuf[hbase-2.0.0] section.
[[shaded.protobuf]]
=== hbase-2.0.0 and the shading of protobufs (HBASE-15638)
As of hbase-2.0.0, our protobuf usage gets a little more involved. HBase
core protobuf references are offset so as to refer to a private,
bundled protobuf. Core stops referring to protobuf
classes at com.google.protobuf.* and instead references protobuf at
the HBase-specific offset
org.apache.hadoop.hbase.shaded.com.google.protobuf.*. We do this indirection
so hbase core can evolve its protobuf version independent of whatever our
dependencies rely on. For instance, HDFS serializes using protobuf.
HDFS is on our CLASSPATH. Without the above described indirection, our
protobuf versions would have to align. HBase would be stuck
on the HDFS protobuf version until HDFS decided to upgrade. HBase
and HDFS versions would be tied.
We had to move on from protobuf-2.5.0 because we need facilities
added in protobuf-3.1.0; in particular being able to save on
copies and avoiding bringing protobufs onheap for
serialization/deserialization.
In hbase-2.0.0, we introduced a new module, `hbase-protocol-shaded`
inside which we contained all to do with protobuf and its subsequent
relocation/shading. This module is in essence a copy of much of the old
`hbase-protocol` but with an extra shading/relocation step.
Core was moved to depend on this new module.
That said, a complication arises around Coprocessor Endpoints (CPEPs).
CPEPs depend on public HBase APIs that reference protobuf classes at
`com.google.protobuf.*` explicitly. For example, in our Table Interface
we have the below as the means by which you obtain a CPEP Service
to make invocations against:
[source,java]
----
...
<T extends com.google.protobuf.Service,R> Map<byte[],R> coprocessorService(
Class<T> service, byte[] startKey, byte[] endKey,
org.apache.hadoop.hbase.client.coprocessor.Batch.Call<T,R> callable)
throws com.google.protobuf.ServiceException, Throwable
----
Existing CPEPs will have made reference to core HBase protobufs
specifying ServerNames or carrying Mutations.
So as to continue being able to service CPEPs and their references
to `com.google.protobuf.*` across the upgrade to hbase-2.0.0 and beyond,
HBase needs to be able to deal with both
`com.google.protobuf.*` references and its internal offset
`org.apache.hadoop.hbase.shaded.com.google.protobuf.*` protobufs.
The `hbase-protocol-shaded` module hosts all
protobufs used by HBase core.
But for the vestigial CPEP references to the (non-shaded) content of
`hbase-protocol`, we keep around most of this module going forward
just so it is available to CPEPs. Retaining most of `hbase-protocol`
makes for overlapping, 'duplicated' proto instances where some exist as
non-shaded/non-relocated here in their old module
location but also in the new location, shaded under
`hbase-protocol-shaded`. In other words, there is an instance
of the generated protobuf class
`org.apache.hadoop.hbase.protobuf.generated.ServerName`
in hbase-protocol and another generated instance that is the same in all
regards except its protobuf references are to the internal shaded
version at `org.apache.hadoop.hbase.shaded.protobuf.generated.ServerName`
(note the 'shaded' addition in the middle of the package name).
If you extend a proto in `hbase-protocol-shaded` for internal use,
consider extending it also in
`hbase-protocol` (and regenerating).
Going forward, we will provide a new module of common types for use
by CPEPs that will have the same guarantees against change as does our
public API. TODO.
=== protobuf changes for hbase-3.0.0 (HBASE-23797)
Since Hadoop (starting from 3.3.x) also shades protobuf and bumps the version to
3.x, there is no reason for us to stay on protobuf 2.5.0 any more.
In HBase 3.0.0, the hbase-protocol module has been purged, the CPEP
implementation should use the protos in hbase-protocol-shaded module, and also
make use of the shaded protobuf in hbase-thirdparty. In general, we will keep
the protobuf version compatible for a whole major release, unless there are
critical problems, for example, a critical CVE on protobuf.
Add this dependency to your pom:
[source,xml]
----
<dependency>
<groupId>org.apache.hbase.thirdparty</groupId>
<artifactId>hbase-shaded-protobuf</artifactId>
<!-- use the version that your target hbase cluster uses -->
<version>${hbase-thirdparty.version}</version>
<scope>provided</scope>
</dependency>
----
And typically you also need to add this plugin to your pom to make your
generated protobuf code also use the shaded and relocated protobuf version
in hbase-thirdparty.
[source,xml]
----
<plugin>
<groupId>com.google.code.maven-replacer-plugin</groupId>
<artifactId>replacer</artifactId>
<version>1.5.3</version>
<executions>
<execution>
<phase>process-sources</phase>
<goals>
<goal>replace</goal>
</goals>
</execution>
</executions>
<configuration>
<basedir>${basedir}/target/generated-sources/</basedir>
<includes>
<include>**/*.java</include>
</includes>
<!-- Ignore errors when missing files, because it means this build
was run with -Dprotoc.skip and there is no -Dreplacer.skip -->
<ignoreErrors>true</ignoreErrors>
<replacements>
<replacement>
<token>([^\.])com.google.protobuf</token>
<value>$1org.apache.hbase.thirdparty.com.google.protobuf</value>
</replacement>
<replacement>
<token>(public)(\W+static)?(\W+final)?(\W+class)</token>
<value>@javax.annotation.Generated("proto") $1$2$3$4</value>
</replacement>
<!-- replacer doesn't support anchoring or negative lookbehind -->
<replacement>
<token>(@javax.annotation.Generated\("proto"\) ){2}</token>
<value>$1</value>
</replacement>
</replacements>
</configuration>
</plugin>
----
In hbase-examples module, we have some examples under the
`org.apache.hadoop.hbase.coprocessor.example` package. You can see
`BulkDeleteEndpoint` and `BulkDelete.proto` for more details, and you can also
check the `pom.xml` of hbase-examples module to see how to make use of the above
plugin.

View File

@ -1,163 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[pv2]]
= Procedure Framework (Pv2): link:https://issues.apache.org/jira/browse/HBASE-12439[HBASE-12439]
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
_Procedure v2 ...aims to provide a unified way to build...multi-step procedures with a rollback/roll-forward ability in case of failure (e.g. create/delete table) -- Matteo Bertozzi, the author of Pv2._
With Pv2 you can build and run state machines. It was built by Matteo to make distributed state transitions in HBase resilient in the face of process failures. Previous to Pv2, state transition handling was spread about the codebase with implementation varying by transition-type and context. Pv2 was inspired by link:https://accumulo.apache.org/1.8/accumulo_user_manual.html#_fault_tolerant_executor_fate[FATE], of Apache Accumulo. +
Early Pv2 aspects have been shipping in HBase for a good while now but it has continued to evolve as it takes on more involved scenarios. What we have now is powerful but intricate in operation and incomplete, in need of cleanup and hardening. In this doc we give an overview of the system so you can make use of it (and help with its polishing).
This system has the awkward name of Pv2 because HBase already had the notion of a Procedure used in snapshots (see hbase-server _org.apache.hadoop.hbase.procedure_ as opposed to hbase-procedure _org.apache.hadoop.hbase.procedure2_). Pv2 supersedes and is to replace Procedure.
== Procedures
A Procedure is a transform made on an HBase entity. Examples of HBase entities would be Regions and Tables. +
Procedures are run by a ProcedureExecutor instance. Procedure current state is kept in the ProcedureStore. +
The ProcedureExecutor has but a primitive view on what goes on inside a Procedure. From its PoV, Procedures are submitted and then the ProcedureExecutor keeps calling _#execute(Object)_ until the Procedure is done. Execute may be called multiple times in the case of failure or restart, so Procedure code must be idempotent, yielding the same result each time it is run. Procedure code can also implement _rollback_ so steps can be undone in case of failure. A call to _execute()_ can result in one of the following possibilities:
* _execute()_ returns
** _null_: indicates we are done.
** _this_: indicates there is more to do so, persist current procedure state and re-_execute()_.
** _Array_ of sub-procedures: indicates a set of procedures needed to be run to completion before we can proceed (after which we expect the framework to call our execute again).
* _execute()_ throws exception
** _suspend_: indicates execution of procedure is suspended and can be resumed due to some external event. The procedure state is persisted.
** _yield_: procedure is added back to scheduler. The procedure state is not persisted.
** _interrupted_: currently same as _yield_.
** Any _exception_ not listed above: Procedure _state_ is changed to _FAILED_ (after which we expect the framework will attempt rollback).
The ProcedureExecutor stamps the framework's notion of Procedure State into the Procedure itself; e.g. it marks Procedures as INITIALIZING on submit. It moves the state to RUNNABLE when it goes to execute. When done, a Procedure gets marked FAILED or SUCCESS depending on the outcome. Here is the list of all states as of this writing:
* *_INITIALIZING_* Procedure in construction, not yet added to the executor
* *_RUNNABLE_* Procedure added to the executor, and ready to be executed.
* *_WAITING_* The procedure is waiting on children (subprocedures) to be completed
* *_WAITING_TIMEOUT_* The procedure is waiting a timeout or an external event
* *_ROLLEDBACK_* The procedure failed and was rolled back.
* *_SUCCESS_* The procedure execution completed successfully.
* *_FAILED_* The procedure execution failed, may need to rollback.
After each execute, the Procedure state is persisted to the ProcedureStore. Hooks are invoked on Procedures so they can preserve custom state. Post-fault, the ProcedureExecutor re-hydrates its pre-crash state by replaying the content of the ProcedureStore. This makes the Procedure Framework resilient against process failure.
=== Implementation
In implementation, Procedures tend to divide transforms into finer-grained tasks and while some of these work items are handed off to sub-procedures,
the bulk are done as processing _steps_ in-Procedure; each invocation of the execute is used to perform a single step, and then the Procedure relinquishes control, returning to the framework. The Procedure does its own tracking of where it is in the processing.
What comprises a sub-task, or _step_ in the execution is up to the Procedure author but generally it is a small piece of work that cannot be further decomposed and that moves the processing forward toward its end state. Having procedures made of many small steps rather than a few large ones allows the Procedure framework to give insight into where we are in the processing. It also allows the framework to be more fair in its execution. As stated above, each step may be called multiple times (failure/restart) so steps must be implemented to be idempotent. +
It is easy to confuse the state that the Procedure itself is keeping with that of the Framework itself. Try to keep them distinct. +
=== Rollback
Rollback is called when the procedure or one of the sub-procedures has failed. The rollback step is supposed to cleanup the resources created during the execute() step. In case of failure and restart, rollback() may be called multiple times, so again the code must be idempotent.
=== Metrics
There are hooks for collecting metrics on submit of the procedure and on finish.
* updateMetricsOnSubmit()
* updateMetricsOnFinish()
Individual procedures can override these methods to collect procedure specific metrics. The default implementations of these methods try to get an object implementing an interface ProcedureMetrics which encapsulates the following set of generic metrics:
* SubmittedCount (Counter): Total number of procedure instances submitted of a type.
* Time (Histogram): Histogram of runtime for procedure instances.
* FailedCount (Counter): Total number of failed procedure instances.
Individual procedures can implement this object and define this generic set of metrics.
=== Baggage
Procedures can carry baggage. One example is the _step_ the procedure last attained (see previous section); procedures persist the enum that marks where they are currently. Other examples might be the Region or Server name the Procedure is currently working on. After each call to execute, Procedure#serializeStateData is called. Procedures can persist whatever they need.
=== Result/State and Queries
(From Matteo's https://issues.apache.org/jira/secure/attachment/12693273/Procedurev2Notification-Bus.pdf[ProcedureV2 and Notification Bus] doc) +
In the case of asynchronous operations, the result must be kept around until the client asks for it. Once we receive a “get” of the result we can schedule the delete of the record. For some operations the result may be “unnecessary” especially in case of failure (e.g. if the create table fails, we can query the operation result or we can just do a list table to see if it was created) so in some cases we can schedule the delete after a timeout. On the client side the operation will return a “Procedure ID”, this ID can be used to wait until the procedure is completed and get the result/exception. +
[source]
----
Admin.doOperation() { long procId = master.doOperation(); master.waitCompletion(procId); }
----
If the master goes down while performing the operation the backup master will pick up the half in-progress operation and complete it. The client will not notice the failure.
== Subprocedures
Subprocedures are _Procedure_ instances created and returned by the _#execute(Object)_ method of a procedure instance (parent procedure). As subprocedures are of type _Procedure_, they can instantiate their own subprocedures. As this is recursive, a procedure stack is maintained by the framework. The framework makes sure that the parent procedure does not proceed till all sub-procedures and their subprocedures in a procedure stack are successfully finished.
== ProcedureExecutor
_ProcedureExecutor_ uses _ProcedureStore_ and _ProcedureScheduler_ and executes procedures submitted to it. Some of the basic operations supported are:
* _abort(procId)_: aborts specified procedure if it is not finished
* _submit(Procedure)_: submits procedure for execution
* _retrieve:_ list of get methods to get _Procedure_ instances and results
* _register/unregister_ listeners: for listening on Procedure related notifications
When _ProcedureExecutor_ starts it loads procedure instances persisted in _ProcedureStore_ from previous run. All unfinished procedures are resumed from the last stored state.
== Nonces
You can pass the nonce that came in with the RPC to the Procedure on submit at the executor. This nonce will then be serialized along w/ the Procedure on persist. If there is a crash, on reload the nonce will be put back into a map of nonces to pid in case a client tries to run the same procedure for a second time (it will be rejected). See the base Procedure and how nonce is a base data member.
== Wait/Wake/Suspend/Yield
Suspend means stop processing a procedure because we can make no more progress until a condition changes; i.e. we sent an RPC and need to wait on the response. The way this works is that a Procedure throws a suspend exception from down in its guts as a GOTO the end-of-the-current-processing step. Suspend also puts the Procedure back on the scheduler. Problematically, we do some accounting on our way out even on suspend, so it can take time exiting (we have to update state in the WAL).
RegionTransitionProcedure#reportTransition is called on receipt of a report from a RS. For Assign and Unassign, this event response from the server we sent an RPC wakes up suspended Assign/Unassigns.
== Locking
Procedure Locks are not about concurrency! They are about giving a Procedure read/write access to an HBase Entity such as a Table or Region so that it is possible to shut out other Procedures from making modifications to an HBase Entity state while the current one is running.
Locking is optional, up to the Procedure implementor, but if an entity is being operated on by a Procedure, all transforms need to be done via Procedures using the same locking scheme, else havoc ensues.
Two ProcedureExecutor Worker threads can actually end up both processing the same Procedure instance. If it happens, the threads are meant to be running different parts of the one Procedure -- changes that do not stamp on each other (this gets awkward around the procedure framework's notion of suspend. More on this below).
Locks optionally may be held for the life of a Procedure. For example, if moving a Region, you probably want to have exclusive access to the HBase Region until the Region completes (or fails). This is used in conjunction with `holdLock(Object)`. If `holdLock(Object)` returns true, the procedure executor will call `acquireLock()` once and thereafter not call `releaseLock(Object)` until the Procedure is done (normally, it calls release/acquire around each invocation of `execute(Object)`).
Procedures that hold the lock for the life of the procedure set Procedure#holdLock to true; i.e. once an Assign Procedure starts, we do not want another procedure meddling w/ the region under assignment. AssignProcedure does this as do Split and Move (if in the middle of a Region move, you do not want it Splitting).
Some locks have a hierarchy. For example, taking a region lock also takes (read) lock on its containing table and namespace to prevent another Procedure obtaining an exclusive lock on the hosting table (or namespace).
== Procedure Types
=== StateMachineProcedure
One can consider each call to the _#execute(Object)_ method as transitioning from one state to another in a state machine. Abstract class _StateMachineProcedure_ is a wrapper around the base _Procedure_ class which provides constructs for implementing a state machine as a _Procedure_. After each state transition the current state is persisted so that, in case of crash/restart, the state transition can be resumed from the previous state of a procedure before the crash/restart. Individual procedures need to define initial and terminus states and hooks _executeFromState()_ and _setNextState()_ are provided for state transitions.
=== RemoteProcedureDispatcher
A new RemoteProcedureDispatcher (+ subclass RSProcedureDispatcher) primitive takes care of running the Procedure-based Assignments remote component. This dispatcher knows about servers. It does aggregation of assignments on a time/count basis so it can send procedures in batches rather than one per RPC. Procedure status comes back on the back of the RegionServer heartbeat reporting online/offline regions (no more notifications via ZK). The response is passed to the AMv2 to process. It will check against the in-memory state. If there is a mismatch, it fences out the RegionServer on the assumption that something went wrong on the RS side. Timeouts trigger retries (Not Yet Implemented!). The Procedure machine ensures only one operation at a time on any one Region/Table using entity _locking_ and smarts about what is serial and what can be run concurrently (locking was zk-based -- you'd put a znode in zk for a table -- but now has been converted to be procedure-based as part of this project).
== References
* Matteo had a slide deck on what the Procedure Framework would look like and the problems it addresses, initially link:https://issues.apache.org/jira/secure/attachment/12845124/ProcedureV2b.pdf[attached to the Pv2 issue.]
* link:https://issues.apache.org/jira/secure/attachment/12693273/Procedurev2Notification-Bus.pdf[A good doc by Matteo] on the problem and how Pv2 addresses it w/ roadmap (from the Pv2 JIRA). We should go back to the roadmap to do the Notification Bus, conversion of log splitting to Pv2, etc.

View File

@ -1,221 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[appendix]
[[hbase.rpc]]
== 0.95 RPC Specification
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
In 0.95, all client/server communication is done with link:https://developers.google.com/protocol-buffers/[protobuf'ed] Messages rather than with link:https://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/Writable.html[Hadoop
Writables].
Our RPC wire format therefore changes.
This document describes the client/server request/response protocol and our new RPC wire-format.
For what RPC is like in 0.94 and previous, see Benoît/Tsuna's link:https://github.com/OpenTSDB/asynchbase/blob/master/src/HBaseRpc.java#L164[Unofficial
Hadoop / HBase RPC protocol documentation].
For more background on how we arrived at this spec., see link:https://docs.google.com/document/d/1WCKwgaLDqBw2vpux0jPsAu2WPTRISob7HGCO8YhfDTA/edit#[HBase
RPC: WIP].
=== Goals
. A wire-format we can evolve
. A format that does not require our rewriting server core or radically changing its current architecture (for later).
=== TODO
. List of problems with currently specified format and where we would like to go in a version2, etc.
For example, what would we have to change if anything to move server async or to support streaming/chunking?
. Diagram on how it works
. A grammar that succinctly describes the wire-format.
Currently we have these words and the content of the rpc protobuf idl but a grammar for the back and forth would help with groking rpc.
Also, a little state machine on client/server interactions would help with understanding (and ensuring correct implementation).
=== RPC
The client will send setup information on connection establish.
Thereafter, the client invokes methods against the remote server sending a protobuf Message and receiving a protobuf Message in response.
Communication is synchronous.
All back and forth is preceded by an int that has the total length of the request/response.
Optionally, Cells(KeyValues) can be passed outside of protobufs in follow-behind Cell blocks
(because link:https://docs.google.com/document/d/1WEtrq-JTIUhlnlnvA0oYRLp0F8MKpEBeBSCFcQiacdw/edit#[we can't protobuf megabytes of KeyValues] or Cells). These CellBlocks are encoded and optionally compressed.
For more detail on the protobufs involved, see the
link:https://github.com/apache/hbase/blob/master/hbase-protocol/src/main/protobuf/RPC.proto[RPC.proto] file in master.
==== Connection Setup
Client initiates connection.
===== Client
On connection setup, client sends a preamble followed by a connection header.
.<preamble>
[source]
----
<MAGIC 4 byte integer> <1 byte RPC Format Version> <1 byte auth type>
----
We need the auth method spec here so the connection header is encoded if auth is enabled.
E.g.: HBas0x000x50 -- 4 bytes of MAGIC -- `HBas' -- plus one byte of version, 0 in this case, and one byte, 0x50 (SIMPLE), of an auth type.
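As an illustrative sketch (not taken from the HBase client code), composing those six preamble bytes by hand looks like this:

[source,java]
----
// 4 bytes of MAGIC ('HBas'), 1 byte of version, 1 byte of auth type.
byte[] preamble = new byte[6];
System.arraycopy("HBas".getBytes(StandardCharsets.UTF_8), 0, preamble, 0, 4);
preamble[4] = 0;           // RPC format version
preamble[5] = (byte) 0x50; // auth type: 0x50 = SIMPLE
----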
.<Protobuf ConnectionHeader Message>
Has user info, and ``protocol'', as well as the encoders and compression the client will use sending CellBlocks.
CellBlock encoders and compressors are for the life of the connection.
CellBlock encoders implement org.apache.hadoop.hbase.codec.Codec.
CellBlocks may then also be compressed.
Compressors implement org.apache.hadoop.io.compress.CompressionCodec.
This protobuf is written using writeDelimited so is prefaced by a pb varint with its serialized length.
===== Server
After client sends preamble and connection header, server does NOT respond if connection setup succeeded.
No response means server is READY to accept requests and to give out response.
If the version or authentication in the preamble is not agreeable or the server has trouble parsing the preamble, it will throw a org.apache.hadoop.hbase.ipc.FatalConnectionException explaining the error and will then disconnect.
If the client in the connection header -- i.e.
the protobuf'd Message that comes after the connection preamble -- asks for a Service the server does not support or a codec the server does not have, again we throw a FatalConnectionException with explanation.
==== Request
After a Connection has been set up, client makes requests.
Server responds.
A request is made up of a protobuf RequestHeader followed by a protobuf Message parameter.
The header includes the method name and optionally, metadata on the optional CellBlock that may be following.
The parameter type suits the method being invoked: i.e.
if we are doing a getRegionInfo request, the protobuf Message param will be an instance of GetRegionInfoRequest.
The response will be a GetRegionInfoResponse.
The CellBlock is optionally used ferrying the bulk of the RPC data: i.e. Cells/KeyValues.
===== Request Parts
.<Total Length>
The request is prefaced by an int that holds the total length of what follows.
.<Protobuf RequestHeader Message>
Will have call.id, trace.id, and method name, etc.
including optional Metadata on the Cell block IFF one is following.
Data is protobuf'd inline in this pb Message or optionally comes in the following CellBlock.
.<Protobuf Param Message>
If the method being invoked is getRegionInfo, if you study the Service descriptor for the client to regionserver protocol, you will find that the request sends a GetRegionInfoRequest protobuf Message param in this position.
.<CellBlock>
An encoded and optionally compressed Cell block.
==== Response
Same as Request, it is a protobuf ResponseHeader followed by a protobuf Message response where the Message response type suits the method invoked.
Bulk of the data may come in a following CellBlock.
===== Response Parts
.<Total Length>
The response is prefaced by an int that holds the total length of what follows.
.<Protobuf ResponseHeader Message>
Will have call.id, etc.
Will include exception if failed processing.
Optionally includes metadata on the optional CellBlock, IFF there is one following.
.<Protobuf Response Message>
The return value; may be nothing if the call failed with an exception.
If the method being invoked is getRegionInfo, if you study the Service descriptor for the client to regionserver protocol, you will find that the response sends a GetRegionInfoResponse protobuf Message param in this position.
.<CellBlock>
An encoded and optionally compressed Cell block.
==== Exceptions
There are two distinct types.
There is the request failed which is encapsulated inside the response header for the response.
The connection stays open to receive new requests.
The second type, the FatalConnectionException, kills the connection.
Exceptions can carry extra information.
See the ExceptionResponse protobuf type.
It has a flag to indicate do-no-retry as well as other miscellaneous payload to help improve client responsiveness.
==== CellBlocks
These are not versioned.
Server can do the codec or it cannot.
If new version of a codec with say, tighter encoding, then give it a new class name.
Codecs will live on the server for all time so old clients can connect.
=== Notes
.Constraints
In some part, the current wire-format -- i.e.
all requests and responses preceded by a length -- has been dictated by the current server non-async architecture.
.One fat pb request or header+param
We went with pb header followed by pb param making a request and a pb header followed by pb response for now.
Doing header+param rather than a single protobuf Message with both header and param content:
. Is closer to what we currently have
. Having a single fat pb requires extra copying putting the already pb'd param into the body of the fat request pb (and the same when making the result)
. We can decide whether to accept the request or not before we read the param; for example, the request might be low priority.
As is, we read header+param in one go as server is currently implemented so this is a TODO.
The advantages are minor.
If later, fat request has clear advantage, can roll out a v2 later.
[[rpc.configs]]
==== RPC Configurations
.CellBlock Codecs
To enable a codec other than the default `KeyValueCodec`, set `hbase.client.rpc.codec` to the name of the Codec class to use.
Codec must implement hbase's `Codec` Interface.
After connection setup, all passed cellblocks will be sent with this codec.
The server will return cellblocks using this same codec as long as the codec is on the servers' CLASSPATH (else you will get `UnsupportedCellCodecException`).
To change the default codec, set `hbase.client.default.rpc.codec`.
To disable cellblocks completely and to go pure protobuf, set the default to the empty String and do not specify a codec in your Configuration.
So, set `hbase.client.default.rpc.codec` to the empty string and do not set `hbase.client.rpc.codec`.
This will cause the client to connect to the server with no codec specified.
If a server sees no codec, it will return all responses in pure protobuf.
Running pure protobuf all the time will be slower than running with cellblocks.
.Compression
Uses hadoop's compression codecs.
To enable compressing of passed CellBlocks, set `hbase.client.rpc.compressor` to the name of the Compressor to use.
Compressor must implement Hadoop's CompressionCodec Interface.
After connection setup, all passed cellblocks will be sent compressed.
The server will return cellblocks compressed using this same compressor as long as the compressor is on its CLASSPATH (else you will get `UnsupportedCompressionCodecException`).
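A hedged sketch of wiring in compression (Hadoop's `GzipCodec` is used here purely as an illustration of a class implementing CompressionCodec):
[source,java]
----
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcCompressorExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // The compressor must implement Hadoop's CompressionCodec Interface.
    conf.set("hbase.client.rpc.compressor", "org.apache.hadoop.io.compress.GzipCodec");
  }
}
----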
:numbered:

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@ -1,487 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[shell]]
= The Apache HBase Shell
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
The Apache HBase Shell is link:http://jruby.org[(J)Ruby]'s IRB with some HBase particular commands added.
Anything you can do in IRB, you should be able to do in the HBase Shell.
To run the HBase shell, do as follows:
[source,bash]
----
$ ./bin/hbase shell
----
Type `help` and then `<RETURN>` to see a listing of shell commands and options.
Browse at least the paragraphs at the end of the help output for the gist of how variables and command arguments are entered into the HBase shell; in particular note how table names, rows, and columns, etc., must be quoted.
See <<shell_exercises,shell exercises>> for example basic shell operation.
Here is a nicely formatted listing of link:http://learnhbase.wordpress.com/2013/03/02/hbase-shell-commands/[all shell
commands] by Rajeshbabu Chintaguntla.
[[scripting]]
== Scripting with Ruby
For examples scripting Apache HBase, look in the HBase _bin_ directory.
Look at the files that end in _*.rb_.
To run one of these files, do as follows:
[source,bash]
----
$ ./bin/hbase org.jruby.Main PATH_TO_SCRIPT
----
== Running the Shell in Non-Interactive Mode
A new non-interactive mode has been added to the HBase Shell (link:https://issues.apache.org/jira/browse/HBASE-11658[HBASE-11658]).
Non-interactive mode captures the exit status (success or failure) of HBase Shell commands and passes that status back to the command interpreter.
If you use the normal interactive mode, the HBase Shell will only ever return its own exit status, which will nearly always be `0` for success.
To invoke non-interactive mode, pass the `-n` or `--non-interactive` option to HBase Shell.
[[hbase.shell.noninteractive]]
== HBase Shell in OS Scripts
You can use the HBase shell from within operating system script interpreters like the Bash shell which is the default command interpreter for most Linux and UNIX distributions.
The following guidelines use Bash syntax, but could be adjusted to work with C-style shells such as csh or tcsh, and could probably be modified to work with the Microsoft Windows script interpreter as well. Submissions are welcome.
NOTE: Spawning HBase Shell commands in this way is slow, so keep that in mind when you are deciding when combining HBase operations with the operating system command line is appropriate.
.Passing Commands to the HBase Shell
====
You can pass commands to the HBase Shell in non-interactive mode (see <<hbase.shell.noninteractive,hbase.shell.noninteractive>>) using the `echo` command and the `|` (pipe) operator.
Be sure to escape characters in the HBase commands which would otherwise be interpreted by the shell.
Some debug-level output has been truncated from the example below.
[source,bash]
----
$ echo "describe 'test1'" | ./hbase shell -n
Version 0.98.3-hadoop2, rd5e65a9144e315bb0a964e7730871af32f5018d5, Sat May 31 19:56:09 PDT 2014
describe 'test1'
DESCRIPTION ENABLED
'test1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NON true
E', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0',
VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIO
NS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS =>
'false', BLOCKSIZE => '65536', IN_MEMORY => 'false'
, BLOCKCACHE => 'true'}
1 row(s) in 3.2410 seconds
----
To suppress all output, echo it to _/dev/null:_
[source,bash]
----
$ echo "describe 'test'" | ./hbase shell -n > /dev/null 2>&1
----
====
.Checking the Result of a Scripted Command
====
Since scripts are not designed to be run interactively, you need a way to check whether your command failed or succeeded.
The HBase shell uses the standard convention of returning a value of `0` for successful commands, and some non-zero value for failed commands.
Bash stores a command's return value in a special environment variable called `$?`.
Because that variable is overwritten each time the shell runs any command, you should store the result in a different, script-defined variable.
This is a naive script that shows one way to store the return value and make a decision based upon it.
[source,bash]
----
#!/bin/bash
echo "describe 'test'" | ./hbase shell -n > /dev/null 2>&1
status=$?
echo "The status was " $status
if [ $status -eq 0 ]; then
echo "The command succeeded"
else
echo "The command may have failed."
fi
exit $status
----
====
=== Checking for Success or Failure In Scripts
Getting an exit code of `0` means that the command you scripted definitely succeeded.
However, getting a non-zero exit code does not necessarily mean the command failed.
The command could have succeeded, but the client lost connectivity, or some other event obscured its success.
This is because RPC commands are stateless.
The only way to be sure of the status of an operation is to check.
For instance, if your script creates a table, but returns a non-zero exit value, you should check whether the table was actually created before trying again to create it.
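For example, here is a minimal sketch (assuming a table named `test`; the exact output of the `exists` command may vary across versions) that checks before retrying a create:
[source,bash]
----
#!/bin/bash
# Check whether table 'test' already exists before trying to create it.
output=$(echo "exists 'test'" | ./hbase shell -n 2>/dev/null)
if echo "$output" | grep -q "does exist"; then
  echo "Table 'test' already exists; skipping create."
else
  echo "create 'test', 'cf'" | ./hbase shell -n
fi
----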
== Read HBase Shell Commands from a Command File
You can enter HBase Shell commands into a text file, one command per line, and pass that file to the HBase Shell.
.Example Command File
----
create 'test', 'cf'
list 'test'
put 'test', 'row1', 'cf:a', 'value1'
put 'test', 'row2', 'cf:b', 'value2'
put 'test', 'row3', 'cf:c', 'value3'
put 'test', 'row4', 'cf:d', 'value4'
scan 'test'
get 'test', 'row1'
disable 'test'
enable 'test'
----
.Directing HBase Shell to Execute the Commands
====
Pass the path to the command file as the only argument to the `hbase shell` command.
Each command is executed and its output is shown.
If you do not include the `exit` command in your script, you are returned to the HBase shell prompt.
There is no way to programmatically check each individual command for success or failure.
Also, though you see the output for each command, the commands themselves are not echoed to the screen so it can be difficult to line up the command with its output.
[source,bash]
----
$ ./hbase shell ./sample_commands.txt
0 row(s) in 3.4170 seconds
TABLE
test
1 row(s) in 0.0590 seconds
0 row(s) in 0.1540 seconds
0 row(s) in 0.0080 seconds
0 row(s) in 0.0060 seconds
0 row(s) in 0.0060 seconds
ROW COLUMN+CELL
row1 column=cf:a, timestamp=1407130286968, value=value1
row2 column=cf:b, timestamp=1407130286997, value=value2
row3 column=cf:c, timestamp=1407130287007, value=value3
row4 column=cf:d, timestamp=1407130287015, value=value4
4 row(s) in 0.0420 seconds
COLUMN CELL
cf:a timestamp=1407130286968, value=value1
1 row(s) in 0.0110 seconds
0 row(s) in 1.5630 seconds
0 row(s) in 0.4360 seconds
----
====
== Passing VM Options to the Shell
You can pass VM options to the HBase Shell using the `HBASE_SHELL_OPTS` environment variable.
You can set this in your environment, for instance by editing _~/.bashrc_, or set it as part of the command to launch HBase Shell.
The following example sets several garbage-collection-related variables, just for the lifetime of the VM running the HBase Shell.
The command should be run all on a single line, but is broken by the `\` character, for readability.
[source,bash]
----
$ HBASE_SHELL_OPTS="-verbose:gc -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps \
-XX:+PrintGCDetails -Xloggc:$HBASE_HOME/logs/gc-hbase.log" ./bin/hbase shell
----
== Overriding configuration when starting the HBase Shell
As of hbase-2.0.5/hbase-2.1.3/hbase-2.2.0/hbase-1.4.10/hbase-1.5.0, you can
pass or override hbase configuration as specified in `hbase-*.xml` by passing
your key/values prefixed with `-D` on the command-line as follows:
[source,bash]
----
$ ./bin/hbase shell -Dhbase.zookeeper.quorum=ZK0.remote.cluster.example.org,ZK1.remote.cluster.example.org,ZK2.remote.cluster.example.org -Draining=false
...
hbase(main):001:0> @shell.hbase.configuration.get("hbase.zookeeper.quorum")
=> "ZK0.remote.cluster.example.org,ZK1.remote.cluster.example.org,ZK2.remote.cluster.example.org"
hbase(main):002:0> @shell.hbase.configuration.get("raining")
=> "false"
----
== Shell Tricks
=== Table variables
HBase 0.95 adds shell commands that provide jruby-style object-oriented references for tables.
Previously, all shell commands that act upon a table had a procedural style that always took the name of the table as an argument.
HBase 0.95 introduces the ability to assign a table to a jruby variable.
The table reference can be used to perform data read/write operations such as puts, scans, and gets, as well as admin functionality such as disabling, dropping, and describing tables.
For example, previously you would always specify a table name:
----
hbase(main):000:0> create 't', 'f'
0 row(s) in 1.0970 seconds
hbase(main):001:0> put 't', 'rold', 'f', 'v'
0 row(s) in 0.0080 seconds
hbase(main):002:0> scan 't'
ROW COLUMN+CELL
rold column=f:, timestamp=1378473207660, value=v
1 row(s) in 0.0130 seconds
hbase(main):003:0> describe 't'
DESCRIPTION ENABLED
't', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_ true
SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2
147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false
', BLOCKCACHE => 'true'}
1 row(s) in 1.4430 seconds
hbase(main):004:0> disable 't'
0 row(s) in 14.8700 seconds
hbase(main):005:0> drop 't'
0 row(s) in 23.1670 seconds
hbase(main):006:0>
----
Now you can assign the table to a variable and use the results in jruby shell code.
----
hbase(main):007 > t = create 't', 'f'
0 row(s) in 1.0970 seconds
=> Hbase::Table - t
hbase(main):008 > t.put 'r', 'f', 'v'
0 row(s) in 0.0640 seconds
hbase(main):009 > t.scan
ROW COLUMN+CELL
r column=f:, timestamp=1331865816290, value=v
1 row(s) in 0.0110 seconds
hbase(main):010:0> t.describe
DESCRIPTION ENABLED
't', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_ true
SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2
147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false
', BLOCKCACHE => 'true'}
1 row(s) in 0.0210 seconds
hbase(main):038:0> t.disable
0 row(s) in 6.2350 seconds
hbase(main):039:0> t.drop
0 row(s) in 0.2340 seconds
----
If the table has already been created, you can assign a Table to a variable by using the get_table method:
----
hbase(main):011 > create 't','f'
0 row(s) in 1.2500 seconds
=> Hbase::Table - t
hbase(main):012:0> tab = get_table 't'
0 row(s) in 0.0010 seconds
=> Hbase::Table - t
hbase(main):013:0> tab.put 'r1' ,'f', 'v'
0 row(s) in 0.0100 seconds
hbase(main):014:0> tab.scan
ROW COLUMN+CELL
r1 column=f:, timestamp=1378473876949, value=v
1 row(s) in 0.0240 seconds
hbase(main):015:0>
----
The list functionality has also been extended so that it returns a list of table names as strings.
You can then use jruby to script table operations based on these names.
The list_snapshots command also acts similarly.
----
hbase(main):016 > tables = list('t.*')
TABLE
t
1 row(s) in 0.1040 seconds
=> ["t"]
hbase(main):017:0> tables.map { |t| disable t ; drop t}
0 row(s) in 2.2510 seconds
=> [nil]
hbase(main):018:0>
----
[[irbrc]]
=== _irbrc_
Create an _.irbrc_ file for yourself in your home directory.
Add customizations.
A useful one is command history, so commands are saved across Shell invocations:
[source,bash]
----
$ more .irbrc
require 'irb/ext/save-history'
IRB.conf[:SAVE_HISTORY] = 100
IRB.conf[:HISTORY_FILE] = "#{ENV['HOME']}/.irb-save-history"
----
If you'd like to avoid printing the result of evaluating each expression to stderr, for example the array of tables returned from the "list" command:
[source,bash]
----
$ echo "IRB.conf[:ECHO] = false" >>~/.irbrc
----
See the `ruby` documentation of _.irbrc_ to learn about other possible configurations.
=== LOG data to timestamp
To convert the date '08/08/16 20:56:29' from an hbase log into a timestamp, do:
----
hbase(main):021:0> import java.text.SimpleDateFormat
hbase(main):022:0> import java.text.ParsePosition
hbase(main):023:0> SimpleDateFormat.new("yy/MM/dd HH:mm:ss").parse("08/08/16 20:56:29", ParsePosition.new(0)).getTime() => 1218920189000
----
To go the other direction:
----
hbase(main):021:0> import java.util.Date
hbase(main):022:0> Date.new(1218920189000).toString() => "Sat Aug 16 20:56:29 UTC 2008"
----
Outputting in a format exactly like that of the HBase log will take a little messing with link:http://download.oracle.com/javase/6/docs/api/java/text/SimpleDateFormat.html[SimpleDateFormat].
=== Query Shell Configuration
----
hbase(main):001:0> @shell.hbase.configuration.get("hbase.rpc.timeout")
=> "60000"
----
To set a config in the shell:
----
hbase(main):005:0> @shell.hbase.configuration.setInt("hbase.rpc.timeout", 61010)
hbase(main):006:0> @shell.hbase.configuration.get("hbase.rpc.timeout")
=> "61010"
----
[[tricks.pre-split]]
=== Pre-splitting tables with the HBase Shell
You can use a variety of options to pre-split tables when creating them via the HBase Shell `create` command.
The simplest approach is to specify an array of split points when creating the table. Note that when specifying string literals as split points, these create split points based on the underlying byte representation of the string. So when specifying a split point of '10', we are actually specifying the byte split point '\x31\x30'.
The split points define `n+1` regions, where `n` is the number of split points. The lowest region will contain all keys from the lowest possible key up to, but not including, the first split point key.
The next region will contain keys from the first split point up to, but not including, the next split point key.
This continues for all split points, up to the last. The last region is defined from the last split point up to the maximum possible key.
[source]
----
hbase>create 't1','f',SPLITS => ['10','20','30']
----
In the above example, the table 't1' will be created with column family 'f', pre-split to four regions. Note the first region will contain all keys from the lowest possible key up to, but not including, '\x31\x30' (the byte representation of '10').
You can pass the split points in a file using the following variation. In this example, the splits are read from a file on the local filesystem at the given path. Each line in the file specifies a split point key.
[source]
----
hbase>create 't14','f',SPLITS_FILE=>'splits.txt'
----
The other options are to automatically compute splits based on a desired number of regions and a splitting algorithm.
HBase supplies algorithms for splitting the key range based on uniform splits or based on hexadecimal keys, but you can provide your own splitting algorithm to subdivide the key range.
[source]
----
# create table with four regions based on random bytes keys
hbase>create 't2','f1', { NUMREGIONS => 4 , SPLITALGO => 'UniformSplit' }
# create table with five regions based on hex keys
hbase>create 't3','f1', { NUMREGIONS => 5, SPLITALGO => 'HexStringSplit' }
----
As the HBase Shell is effectively a Ruby environment, you can use simple Ruby scripts to compute splits algorithmically.
[source]
----
# generate splits for long (Ruby fixnum) key range from start to end key
hbase(main):070:0> def gen_splits(start_key,end_key,num_regions)
hbase(main):071:1> results=[]
hbase(main):072:1> range=end_key-start_key
hbase(main):073:1> incr=(range/num_regions).floor
hbase(main):074:1> for i in 1 .. num_regions-1
hbase(main):075:2> results.push([i*incr+start_key].pack("N"))
hbase(main):076:2> end
hbase(main):077:1> return results
hbase(main):078:1> end
hbase(main):079:0>
hbase(main):080:0> splits=gen_splits(1,2000000,10)
=> ["\000\003\r@", "\000\006\032\177", "\000\t'\276", "\000\f4\375", "\000\017B<", "\000\022O{", "\000\025\\\272", "\000\030i\371", "\000\ew8"]
hbase(main):081:0> create 'test_splits','f',SPLITS=>splits
0 row(s) in 0.2670 seconds
=> Hbase::Table - test_splits
----
Note that the HBase Shell command `truncate` effectively drops and recreates the table with default options which will discard any pre-splitting.
If you need to truncate a pre-split table, you must drop and recreate the table explicitly to re-specify custom split options.
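A sketch of doing so in the shell (the table name and split points here are illustrative):
[source]
----
hbase> disable 'test_splits'
hbase> drop 'test_splits'
hbase> create 'test_splits', 'f', SPLITS => ['10', '20', '30']
----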
=== Debug
==== Shell debug switch
You can set a debug switch in the shell to see more output -- e.g.
more of the stack trace on exception -- when you run a command:
[source]
----
hbase> debug <RETURN>
----
==== DEBUG log level
To enable DEBUG level logging in the shell, launch it with the `-d` option.
[source,bash]
----
$ ./bin/hbase shell -d
----
=== Commands
==== count
The `count` command returns the number of rows in a table.
It's quite fast when configured with the right CACHE:
[source]
----
hbase> count '<tablename>', CACHE => 1000
----
The above count fetches 1000 rows at a time.
Set CACHE lower if your rows are big.
The default is to fetch one row at a time.

View File

@ -1,118 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[slow_log_responses_from_systable]]
==== Get Slow/Large Response Logs from System table hbase:slowlog
The above section provides details about Admin APIs:
* get_slowlog_responses
* get_largelog_responses
* clear_slowlog_responses
All of the above APIs access online in-memory ring buffers from
individual RegionServers and accumulate logs from the ring buffers to display
to the end user. However, since the logs are stored in memory, after a RegionServer is
restarted, all the objects held in the memory of that RegionServer will be cleaned up
and the previous logs are lost. What if we want to persist all these logs forever?
What if we want to store them in such a manner that operators can get all historical
records with some filters? e.g. get me all large/slow RPC logs that were triggered by
user1 and are related to region:
cluster_test,cccccccc,1589635796466.aa45e1571d533f5ed0bb31cdccaaf9cf. ?
If we have a system table that stores such logs in roughly increasing
order of time, it can definitely help operators debug historical events
(scan, get, put, compaction, flush etc) with detailed inputs.
The config which enables the system table to be created and to store all log events is
`hbase.regionserver.slowlog.systable.enabled`.
The default value for this config is `false`. If set to `true`
(Note: `hbase.regionserver.slowlog.buffer.enabled` should also be `true`),
a chore running in every RegionServer will persist the slow/large logs into
the table hbase:slowlog. By default the chore runs every 10 min. The duration can be configured
with the key `hbase.slowlog.systable.chore.duration`. By default, a RegionServer will
store up to 1000 (config key: `hbase.regionserver.slowlog.systable.queue.size`)
slow/large logs in an internal queue, and the chore will retrieve these logs
from the queue and perform batch insertion into hbase:slowlog.
hbase:slowlog has a single ColumnFamily: `info`.
`info` contains multiple qualifiers, the same attributes present as
part of the `get_slowlog_responses` API response:
* info:call_details
* info:client_address
* info:method_name
* info:param
* info:processing_time
* info:queue_time
* info:region_name
* info:response_size
* info:server_class
* info:start_time
* info:type
* info:username
An example of two rows from a hbase:slowlog scan result:
[source]
----
\x024\xC1\x03\xE9\x04\xF5@ column=info:call_details, timestamp=2020-05-16T14:58:14.211Z, value=Scan(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ScanRequest)
\x024\xC1\x03\xE9\x04\xF5@ column=info:client_address, timestamp=2020-05-16T14:58:14.211Z, value=172.20.10.2:57347
\x024\xC1\x03\xE9\x04\xF5@ column=info:method_name, timestamp=2020-05-16T14:58:14.211Z, value=Scan
\x024\xC1\x03\xE9\x04\xF5@ column=info:param, timestamp=2020-05-16T14:58:14.211Z, value=region { type: REGION_NAME value: "hbase:meta,,1" } scan { column { family: "info" } attribute { name: "_isolationle
vel_" value: "\x5C000" } start_row: "cluster_test,33333333,99999999999999" stop_row: "cluster_test,," time_range { from: 0 to: 9223372036854775807 } max_versions: 1 cache_blocks
: true max_result_size: 2097152 reversed: true caching: 10 include_stop_row: true readType: PREAD } number_of_rows: 10 close_scanner: false client_handles_partials: true client_
handles_heartbeats: true track_scan_metrics: false
\x024\xC1\x03\xE9\x04\xF5@ column=info:processing_time, timestamp=2020-05-16T14:58:14.211Z, value=18
\x024\xC1\x03\xE9\x04\xF5@ column=info:queue_time, timestamp=2020-05-16T14:58:14.211Z, value=0
\x024\xC1\x03\xE9\x04\xF5@ column=info:region_name, timestamp=2020-05-16T14:58:14.211Z, value=hbase:meta,,1
\x024\xC1\x03\xE9\x04\xF5@ column=info:response_size, timestamp=2020-05-16T14:58:14.211Z, value=1575
\x024\xC1\x03\xE9\x04\xF5@ column=info:server_class, timestamp=2020-05-16T14:58:14.211Z, value=HRegionServer
\x024\xC1\x03\xE9\x04\xF5@ column=info:start_time, timestamp=2020-05-16T14:58:14.211Z, value=1589640743732
\x024\xC1\x03\xE9\x04\xF5@ column=info:type, timestamp=2020-05-16T14:58:14.211Z, value=ALL
\x024\xC1\x03\xE9\x04\xF5@ column=info:username, timestamp=2020-05-16T14:58:14.211Z, value=user2
\x024\xC1\x06X\x81\xF6\xEC column=info:call_details, timestamp=2020-05-16T14:59:58.764Z, value=Scan(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ScanRequest)
\x024\xC1\x06X\x81\xF6\xEC column=info:client_address, timestamp=2020-05-16T14:59:58.764Z, value=172.20.10.2:57348
\x024\xC1\x06X\x81\xF6\xEC column=info:method_name, timestamp=2020-05-16T14:59:58.764Z, value=Scan
\x024\xC1\x06X\x81\xF6\xEC column=info:param, timestamp=2020-05-16T14:59:58.764Z, value=region { type: REGION_NAME value: "cluster_test,cccccccc,1589635796466.aa45e1571d533f5ed0bb31cdccaaf9cf." } scan { a
ttribute { name: "_isolationlevel_" value: "\x5C000" } start_row: "cccccccc" time_range { from: 0 to: 9223372036854775807 } max_versions: 1 cache_blocks: true max_result_size: 2
097152 caching: 2147483647 include_stop_row: false } number_of_rows: 2147483647 close_scanner: false client_handles_partials: true client_handles_heartbeats: true track_scan_met
rics: false
\x024\xC1\x06X\x81\xF6\xEC column=info:processing_time, timestamp=2020-05-16T14:59:58.764Z, value=24
\x024\xC1\x06X\x81\xF6\xEC column=info:queue_time, timestamp=2020-05-16T14:59:58.764Z, value=0
\x024\xC1\x06X\x81\xF6\xEC column=info:region_name, timestamp=2020-05-16T14:59:58.764Z, value=cluster_test,cccccccc,1589635796466.aa45e1571d533f5ed0bb31cdccaaf9cf.
\x024\xC1\x06X\x81\xF6\xEC column=info:response_size, timestamp=2020-05-16T14:59:58.764Z, value=211227
\x024\xC1\x06X\x81\xF6\xEC column=info:server_class, timestamp=2020-05-16T14:59:58.764Z, value=HRegionServer
\x024\xC1\x06X\x81\xF6\xEC column=info:start_time, timestamp=2020-05-16T14:59:58.764Z, value=1589640743932
\x024\xC1\x06X\x81\xF6\xEC column=info:type, timestamp=2020-05-16T14:59:58.764Z, value=ALL
\x024\xC1\x06X\x81\xF6\xEC column=info:username, timestamp=2020-05-16T14:59:58.764Z, value=user1
----
Operators can use ColumnValueFilter to filter records based on region_name, username,
client_address, etc.
Time-range based queries will also be very useful.
Example:
[source]
----
scan 'hbase:slowlog', { TIMERANGE => [1589621394000, 1589637999999] }
----
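And, as a sketch of column-value filtering using the shell's filter-string syntax (`SingleColumnValueFilter` is shown here; exact filter support may vary by version):
[source]
----
scan 'hbase:slowlog', { FILTER => "SingleColumnValueFilter('info', 'username', =, 'binary:user1')" }
----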

View File

@ -1,152 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[snapshot_scanner]]
== Scan over snapshot
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
:toc: left
:source-language: java
In HBase, a scan of a table costs server-side HBase resources for reading, formatting, and returning data back to the client.
Luckily, HBase provides a TableSnapshotScanner and TableSnapshotInputFormat (introduced by link:https://issues.apache.org/jira/browse/HBASE-8369[HBASE-8369]),
which can scan HBase-written HFiles directly in the HDFS filesystem, completely bypassing HBase. This access mode
performs better than going via HBase and can be used with an offline HBase with in-place or exported
snapshot HFiles.
To read HFiles directly, the user must have sufficient permissions to access snapshots or in-place HBase HFiles.
=== TableSnapshotScanner
TableSnapshotScanner provides a means for running a single client-side scan over snapshot files.
When using TableSnapshotScanner, we must specify a temporary directory to copy the snapshot files into.
The client user should have write permissions to this directory, and the directory should not be a subdirectory of
hbase.rootdir. The scanner deletes the contents of the directory once the scanner is closed.
.Use TableSnapshotScanner
====
[source,java]
----
Path restoreDir = new Path("XX"); // restore dir should not be a subdirectory of hbase.rootdir
Scan scan = new Scan();
try (TableSnapshotScanner scanner = new TableSnapshotScanner(conf, restoreDir, snapshotName, scan)) {
Result result = scanner.next();
while (result != null) {
...
result = scanner.next();
}
}
----
====
=== TableSnapshotInputFormat
TableSnapshotInputFormat provides a way to scan over snapshot HFiles in a MapReduce job.
.Use TableSnapshotInputFormat
====
[source,java]
----
Job job = new Job(conf);
Path restoreDir = new Path("XX"); // restore dir should not be a subdirectory of hbase.rootdir
Scan scan = new Scan();
TableMapReduceUtil.initTableSnapshotMapperJob(snapshotName, scan, MyTableMapper.class, MyMapKeyOutput.class, MyMapOutputValueWritable.class, job, true, restoreDir);
----
====
=== Permission to access snapshot and data files
Generally, only the HBase owner or the HDFS admin has permission to access HFiles.
link:https://issues.apache.org/jira/browse/HBASE-18659[HBASE-18659] uses HDFS ACLs to give HBase granted users permission to access snapshot files.
==== link:https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#ACLs_Access_Control_Lists[HDFS ACLs]
HDFS ACLs supports an "access ACL", which defines the rules to enforce during permission checks, and a "default ACL",
which defines the ACL entries that new child files or sub-directories receive automatically during creation.
Via HDFS ACLs, HBase syncs granted users with read permission to HFiles.
==== Basic idea
The HBase files are organized in the following ways:
* {hbase-rootdir}/.tmp/data/{namespace}/{table}
* {hbase-rootdir}/data/{namespace}/{table}
* {hbase-rootdir}/archive/data/{namespace}/{table}
* {hbase-rootdir}/.hbase-snapshot/{snapshotName}
So the basic idea is to add or remove HDFS ACLs on the files of the global/namespace/table directory
when permission is granted or revoked at the global/namespace/table level.
See the design doc in link:https://issues.apache.org/jira/browse/HBASE-18659[HBASE-18659] for more details.
==== Configuration to use this feature
* Firstly, make sure that HDFS ACLs are enabled and umask is set to 027
----
dfs.namenode.acls.enabled = true
fs.permissions.umask-mode = 027
----
* Add the master coprocessors. Please make sure the SnapshotScannerHDFSAclController is configured after the AccessController:
----
hbase.coprocessor.master.classes = "org.apache.hadoop.hbase.security.access.AccessController
,org.apache.hadoop.hbase.security.access.SnapshotScannerHDFSAclController"
----
* Enable this feature
----
hbase.acl.sync.to.hdfs.enable=true
----
* Modify the table schema to enable this feature for a specified table. This config is
`false` by default for every table, which means the HBase granted ACLs will not be synced to HDFS:
----
alter 't1', CONFIGURATION => {'hbase.acl.sync.to.hdfs.enable' => 'true'}
----
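As a hedged illustration (paths assume the default `hbase.rootdir` layout; `user1` and `t1` are placeholders), after granting read permission you can inspect the synced HDFS ACLs:
[source,bash]
----
# In the HBase shell, grant read permission: grant 'user1', 'R', 't1'
# Then, from a terminal, inspect the ACLs on the table's data directory:
hdfs dfs -getfacl /hbase/data/default/t1
----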
==== Limitation
There are some limitations for this feature:
=====
If we enable this feature, some master operations such as grant, revoke, snapshot...
(See the design doc for more details) will be slower, as we need to sync HDFS ACLs to the related HFiles.
=====
=====
HDFS has a config which limits the maximum number of ACL entries for one directory or file:
----
dfs.namenode.acls.max.entries = 32 (default value)
----
The 32 entries include four fixed users for each directory or file: owner, group, other, and mask.
For a directory, the four users account for 8 ACL entries (access and default) and for a file, the four
users account for 4 ACL entries (access). This means 24 ACL entries are left for named users or groups.
Based on this limitation, we can only sync up to 12 HBase granted users' ACLs. This means that if a table
enables this feature, the total number of users with table, namespace-of-this-table, or global READ permission
should not be greater than 12.
=====
=====
There are some cases that this coprocessor has not handled or could not handle, so the user HDFS ACLs
are not synced normally. For example, it will not make a reference link to an HFile of another table.
=====

View File

@ -1,699 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[spark]]
= HBase and Spark
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
link:https://spark.apache.org/[Apache Spark] is a software framework that is used
to process data in memory in a distributed manner, and is replacing MapReduce in
many use cases.
Spark itself is out of scope of this document; please refer to the Spark site for
more information on the Spark project and subprojects. This document will focus
on 4 main interaction points between Spark and HBase. Those interaction points are:
Basic Spark::
The ability to have an HBase Connection at any point in your Spark DAG.
Spark Streaming::
The ability to have an HBase Connection at any point in your Spark Streaming
application.
Spark Bulk Load::
The ability to write directly to HBase HFiles for bulk insertion into HBase
SparkSQL/DataFrames::
The ability to write SparkSQL that draws on tables that are represented in HBase.
The following sections will walk through examples of all these interaction points.
== Basic Spark
This section discusses Spark HBase integration at the lowest and simplest levels.
All the other interaction points are built upon the concepts that will be described
here.
At the root of all Spark and HBase integration is the HBaseContext. The HBaseContext
takes in HBase configurations and pushes them to the Spark executors. This allows
us to have an HBase Connection per Spark Executor in a static location.
For reference, Spark Executors can be on the same nodes as the Region Servers or
on different nodes; there is no dependence on co-location. Think of every Spark
Executor as a multi-threaded client application. This allows any Spark Tasks
running on the executors to access the shared Connection object.
.HBaseContext Usage Example
====
This example shows how HBaseContext can be used to do a `foreachPartition` on an RDD
in Scala:
[source, scala]
----
val sc = new SparkContext("local", "test")
val config = new HBaseConfiguration()
...
val hbaseContext = new HBaseContext(sc, config)
rdd.hbaseForeachPartition(hbaseContext, (it, conn) => {
val bufferedMutator = conn.getBufferedMutator(TableName.valueOf("t1"))
it.foreach((putRecord) => {
    val put = new Put(putRecord._1)
    putRecord._2.foreach((putValue) => put.addColumn(putValue._1, putValue._2, putValue._3))
    bufferedMutator.mutate(put)
})
bufferedMutator.flush()
bufferedMutator.close()
})
----
Here is the same example implemented in Java:
[source, java]
----
JavaSparkContext jsc = new JavaSparkContext(sparkConf);
try {
List<byte[]> list = new ArrayList<>();
list.add(Bytes.toBytes("1"));
...
list.add(Bytes.toBytes("5"));
JavaRDD<byte[]> rdd = jsc.parallelize(list);
Configuration conf = HBaseConfiguration.create();
JavaHBaseContext hbaseContext = new JavaHBaseContext(jsc, conf);
hbaseContext.foreachPartition(rdd,
new VoidFunction<Tuple2<Iterator<byte[]>, Connection>>() {
public void call(Tuple2<Iterator<byte[]>, Connection> t)
throws Exception {
Table table = t._2().getTable(TableName.valueOf(tableName));
BufferedMutator mutator = t._2().getBufferedMutator(TableName.valueOf(tableName));
while (t._1().hasNext()) {
byte[] b = t._1().next();
Result r = table.get(new Get(b));
if (r.getExists()) {
mutator.mutate(new Put(b));
}
}
mutator.flush();
mutator.close();
table.close();
}
});
} finally {
jsc.stop();
}
----
====
All functionality between Spark and HBase is supported both in Scala and in
Java, with the exception of SparkSQL, which supports any language that is
supported by Spark. For the remainder of this documentation we will focus on
Scala examples.
The examples above illustrate how to do a foreachPartition with a connection. A
number of other Spark base functions are supported out of the box:
// tag::spark_base_functions[]
`bulkPut`:: For massively parallel sending of puts to HBase
`bulkDelete`:: For massively parallel sending of deletes to HBase
`bulkGet`:: For massively parallel sending of gets to HBase to create a new RDD
`mapPartition`:: To do a Spark Map function with a Connection object to allow full
access to HBase
`hbaseRDD`:: To simplify a distributed scan to create an RDD
// end::spark_base_functions[]
For examples of all these functionalities, see the
link:https://github.com/apache/hbase-connectors/tree/master/spark[hbase-spark integration]
in the link:https://github.com/apache/hbase-connectors[hbase-connectors] repository
(the hbase-spark connectors live outside hbase core in a related,
associated repo maintained by the Apache HBase project).
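As a flavor of these functions, here is a hedged sketch of `bulkPut` on an RDD, mirroring the DStream example later in this chapter (the implicit `hbaseBulkPut` is assumed to come from the hbase-spark RDD functions import):
[source, scala]
----
// A sketch, not the definitive API: assumes the hbase-spark implicits are in scope,
// e.g. import org.apache.hadoop.hbase.spark.HBaseRDDFunctions._
rdd.hbaseBulkPut(hbaseContext, TableName.valueOf("t1"),
  (putRecord) => {
    // Convert each RDD record (rowkey, Seq[(family, qualifier, value)]) to a Put.
    val put = new Put(putRecord._1)
    putRecord._2.foreach((putValue) => put.addColumn(putValue._1, putValue._2, putValue._3))
    put
  })
----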
== Spark Streaming
https://spark.apache.org/streaming/[Spark Streaming] is a micro batching stream
processing framework built on top of Spark. HBase and Spark Streaming make great
companions in that HBase can provide the following benefits alongside Spark
Streaming:
* A place to grab reference data or profile data on the fly
* A place to store counts or aggregates in a way that supports Spark Streaming's
promise of _exactly-once processing_.
The link:https://github.com/apache/hbase-connectors/tree/master/spark[hbase-spark integration]
with Spark Streaming is similar to its normal Spark integration points, in that the following
commands are possible straight off a Spark Streaming DStream.
include::spark.adoc[tags=spark_base_functions]
.`bulkPut` Example with DStreams
====
Below is an example of bulkPut with DStreams. It is very close in feel to the RDD
bulk put.
[source, scala]
----
val sc = new SparkContext("local", "test")
val config = new HBaseConfiguration()
val hbaseContext = new HBaseContext(sc, config)
val ssc = new StreamingContext(sc, Milliseconds(200))
val rdd1 = ...
val rdd2 = ...
val queue = mutable.Queue[RDD[(Array[Byte], Array[(Array[Byte],
Array[Byte], Array[Byte])])]]()
queue += rdd1
queue += rdd2
val dStream = ssc.queueStream(queue)
dStream.hbaseBulkPut(
hbaseContext,
TableName.valueOf(tableName),
(putRecord) => {
val put = new Put(putRecord._1)
putRecord._2.foreach((putValue) => put.addColumn(putValue._1, putValue._2, putValue._3))
put
})
----
There are three inputs to the `hbaseBulkPut` function:
the hbaseContext, which carries the broadcast configuration that links us
to the HBase Connections in the executors; the name of the table we are
putting data into; and a function that will convert a record in the DStream
into an HBase Put object.
====
== Bulk Load
There are two options for bulk loading data into HBase with Spark. There is the
basic bulk load functionality that will work for cases where your rows have
millions of columns and cases where your columns are not consolidated and
partitioned before the map side of the Spark bulk load process.
There is also a thin record bulk load option with Spark. This second option is
designed for tables that have fewer than 10k columns per row. The advantage
of this second option is higher throughput and less overall load on the Spark
shuffle operation.
Both implementations work more or less like the MapReduce bulk load process in
that a partitioner partitions the rowkeys based on region splits and
the row keys are sent to the reducers in order, so that HFiles can be written
out directly from the reduce phase.
In Spark terms, the bulk load will be implemented around a Spark
`repartitionAndSortWithinPartitions` followed by a Spark `foreachPartition`.
First, let's look at an example of using the basic bulk load functionality.
.Bulk Loading Example
====
The following example shows bulk loading with Spark.
[source, scala]
----
val sc = new SparkContext("local", "test")
val config = new HBaseConfiguration()
val hbaseContext = new HBaseContext(sc, config)
val stagingFolder = ...
val rdd = sc.parallelize(Array(
(Bytes.toBytes("1"),
(Bytes.toBytes(columnFamily1), Bytes.toBytes("a"), Bytes.toBytes("foo1"))),
(Bytes.toBytes("3"),
(Bytes.toBytes(columnFamily1), Bytes.toBytes("b"), Bytes.toBytes("foo2.b"))), ...
rdd.hbaseBulkLoad(TableName.valueOf(tableName),
t => {
val rowKey = t._1
val family:Array[Byte] = t._2(0)._1
val qualifier = t._2(0)._2
val value = t._2(0)._3
val keyFamilyQualifier= new KeyFamilyQualifier(rowKey, family, qualifier)
Seq((keyFamilyQualifier, value)).iterator
},
stagingFolder.getPath)
val load = new LoadIncrementalHFiles(config)
load.doBulkLoad(new Path(stagingFolder.getPath),
conn.getAdmin, table, conn.getRegionLocator(TableName.valueOf(tableName)))
----
====
The `hbaseBulkLoad` function takes three required parameters:
. The table name of the table we intend to bulk load to
. A function that will convert a record in the RDD to a tuple key-value pair, with
the tuple key being a KeyFamilyQualifier object and the value being the cell value.
The KeyFamilyQualifier object will hold the RowKey, Column Family, and Column Qualifier.
The shuffle will partition on the RowKey but will sort by all three values.
. The temporary path for the HFile to be written out to
Following the Spark bulk load command, use HBase's LoadIncrementalHFiles object
to load the newly created HFiles into HBase.
.Additional Parameters for Bulk Loading with Spark
You can set the following attributes with additional parameter options on `hbaseBulkLoad`:
* Max file size of the HFiles
* A flag to exclude HFiles from compactions
* Column Family settings for compression, bloomType, blockSize, and dataBlockEncoding
.Using Additional Parameters
====
[source, scala]
----
val sc = new SparkContext("local", "test")
val config = new HBaseConfiguration()
val hbaseContext = new HBaseContext(sc, config)
val stagingFolder = ...
val rdd = sc.parallelize(Array(
(Bytes.toBytes("1"),
(Bytes.toBytes(columnFamily1), Bytes.toBytes("a"), Bytes.toBytes("foo1"))),
(Bytes.toBytes("3"),
(Bytes.toBytes(columnFamily1), Bytes.toBytes("b"), Bytes.toBytes("foo2.b"))), ...
val familyHBaseWriterOptions = new java.util.HashMap[Array[Byte], FamilyHFileWriteOptions]
val f1Options = new FamilyHFileWriteOptions("GZ", "ROW", 128, "PREFIX")
familyHBaseWriterOptions.put(Bytes.toBytes("columnFamily1"), f1Options)
rdd.hbaseBulkLoad(TableName.valueOf(tableName),
t => {
val rowKey = t._1
val family:Array[Byte] = t._2(0)._1
val qualifier = t._2(0)._2
val value = t._2(0)._3
val keyFamilyQualifier= new KeyFamilyQualifier(rowKey, family, qualifier)
Seq((keyFamilyQualifier, value)).iterator
},
stagingFolder.getPath,
familyHBaseWriterOptions,
compactionExclude = false,
HConstants.DEFAULT_MAX_FILE_SIZE)
val load = new LoadIncrementalHFiles(config)
load.doBulkLoad(new Path(stagingFolder.getPath),
conn.getAdmin, table, conn.getRegionLocator(TableName.valueOf(tableName)))
----
====
Now let's look at how you would call the thin record bulk load implementation.
.Using thin record bulk load
====
[source, scala]
----
val sc = new SparkContext("local", "test")
val config = new HBaseConfiguration()
val hbaseContext = new HBaseContext(sc, config)
val stagingFolder = ...
val rdd = sc.parallelize(Array(
("1",
(Bytes.toBytes(columnFamily1), Bytes.toBytes("a"), Bytes.toBytes("foo1"))),
("3",
(Bytes.toBytes(columnFamily1), Bytes.toBytes("b"), Bytes.toBytes("foo2.b"))), ...
rdd.hbaseBulkLoadThinRows(hbaseContext,
TableName.valueOf(tableName),
t => {
val rowKey = t._1
val familyQualifiersValues = new FamiliesQualifiersValues
t._2.foreach(f => {
val family:Array[Byte] = f._1
val qualifier = f._2
val value:Array[Byte] = f._3
familyQualifiersValues +=(family, qualifier, value)
})
(new ByteArrayWrapper(Bytes.toBytes(rowKey)), familyQualifiersValues)
},
stagingFolder.getPath,
new java.util.HashMap[Array[Byte], FamilyHFileWriteOptions],
compactionExclude = false,
20)
val load = new LoadIncrementalHFiles(config)
load.doBulkLoad(new Path(stagingFolder.getPath),
conn.getAdmin, table, conn.getRegionLocator(TableName.valueOf(tableName)))
----
====
Note that the big difference when using bulk load for thin rows is that the function
returns a tuple, with the first value being the row key and the second value
being an object of FamiliesQualifiersValues, which will contain all the
values for this row, for all column families.
== SparkSQL/DataFrames
The link:https://github.com/apache/hbase-connectors/tree/master/spark[hbase-spark integration]
leverages
link:https://databricks.com/blog/2015/01/09/spark-sql-data-sources-api-unified-data-access-for-the-spark-platform.html[DataSource API]
(link:https://issues.apache.org/jira/browse/SPARK-3247[SPARK-3247])
introduced in Spark 1.2.0, which bridges the gap between the simple HBase KV store and complex
relational SQL queries and enables users to perform complex data analytical work
on top of HBase using Spark. An HBase DataFrame is a standard Spark DataFrame, and is able to
interact with any other data sources such as Hive, Orc, Parquet, JSON, etc.
The link:https://github.com/apache/hbase-connectors/tree/master/spark[hbase-spark integration]
applies critical techniques such as partition pruning, column pruning,
predicate pushdown and data locality.
To use the
link:https://github.com/apache/hbase-connectors/tree/master/spark[hbase-spark integration]
connector, users need to define the Catalog for the schema mapping
between HBase and Spark tables, prepare the data and populate the HBase table,
then load the HBase DataFrame. After that, users can run integrated queries and access records
in HBase tables with SQL queries. The following illustrates the basic procedure.
=== Define catalog
[source, scala]
----
def catalog = s"""{
       |"table":{"namespace":"default", "name":"table1"},
       |"rowkey":"key",
       |"columns":{
         |"col0":{"cf":"rowkey", "col":"key", "type":"string"},
         |"col1":{"cf":"cf1", "col":"col1", "type":"boolean"},
         |"col2":{"cf":"cf2", "col":"col2", "type":"double"},
         |"col3":{"cf":"cf3", "col":"col3", "type":"float"},
         |"col4":{"cf":"cf4", "col":"col4", "type":"int"},
         |"col5":{"cf":"cf5", "col":"col5", "type":"bigint"},
         |"col6":{"cf":"cf6", "col":"col6", "type":"smallint"},
         |"col7":{"cf":"cf7", "col":"col7", "type":"string"},
         |"col8":{"cf":"cf8", "col":"col8", "type":"tinyint"}
       |}
     |}""".stripMargin
----
The catalog defines a mapping between HBase and Spark tables. There are two critical parts of this catalog.
One is the rowkey definition; the other is the mapping between a table column in Spark and
the column family and column qualifier in HBase. The above defines a schema for an HBase table
named table1, with row key as key and a number of columns (col1 `-` col8). Note that the rowkey
also has to be defined in detail as a column (col0), which has a specific cf (rowkey).
=== Save the DataFrame
[source, scala]
----
case class HBaseRecord(
  col0: String,
  col1: Boolean,
  col2: Double,
  col3: Float,
  col4: Int,
  col5: Long,
  col6: Short,
  col7: String,
  col8: Byte)

object HBaseRecord
{
  def apply(i: Int, t: String): HBaseRecord = {
    val s = s"""row${"%03d".format(i)}"""
    HBaseRecord(s,
      i % 2 == 0,
      i.toDouble,
      i.toFloat,
      i,
      i.toLong,
      i.toShort,
      s"String$i: $t",
      i.toByte)
  }
}

val data = (0 to 255).map { i => HBaseRecord(i, "extra") }

sc.parallelize(data).toDF.write.options(
  Map(HBaseTableCatalog.tableCatalog -> catalog, HBaseTableCatalog.newTable -> "5"))
  .format("org.apache.hadoop.hbase.spark")
  .save()
----
`data` prepared by the user is a local Scala collection which has 256 HBaseRecord objects.
`sc.parallelize(data)` function distributes `data` to form an RDD. `toDF` returns a DataFrame.
`write` function returns a DataFrameWriter used to write the DataFrame to external storage
systems (e.g. HBase here). Given a DataFrame with specified schema `catalog`, `save` function
will create an HBase table with 5 regions and save the DataFrame inside.
=== Load the DataFrame
[source, scala]
----
def withCatalog(cat: String): DataFrame = {
sqlContext
.read
.options(Map(HBaseTableCatalog.tableCatalog->cat))
.format("org.apache.hadoop.hbase.spark")
.load()
}
val df = withCatalog(catalog)
----
In the withCatalog function, sqlContext is a variable of SQLContext, which is the entry point
for working with structured data (rows and columns) in Spark.
`read` returns a DataFrameReader that can be used to read data in as a DataFrame.
The `option` function adds input options for the underlying data source to the DataFrameReader,
and the `format` function specifies the input data source format for the DataFrameReader.
The `load()` function loads input in as a DataFrame. The data frame `df` returned
by the `withCatalog` function can be used to access the HBase table, as in the query examples that follow.
=== Language Integrated Query
[source, scala]
----
val s = df.filter(($"col0" <= "row050" && $"col0" > "row040") ||
$"col0" === "row005" ||
$"col0" <= "row005")
.select("col0", "col1", "col4")
s.show
----
DataFrame can do various operations, such as join, sort, select, filter, orderBy and so on.
`df.filter` above filters rows using the given SQL expression. `select` selects a set of columns:
`col0`, `col1` and `col4`.
=== SQL Query
[source, scala]
----
df.registerTempTable("table1")
sqlContext.sql("select count(col1) from table1").show
----
`registerTempTable` registers `df` DataFrame as a temporary table using the table name `table1`.
The lifetime of this temporary table is tied to the SQLContext that was used to create `df`.
`sqlContext.sql` function allows the user to execute SQL queries.
=== Others
.Query with different timestamps
====
In HBaseSparkConf, four parameters related to timestamp can be set: TIMESTAMP,
MIN_TIMESTAMP, MAX_TIMESTAMP and MAX_VERSIONS. Users can query records with
a specific timestamp, or with time ranges via MIN_TIMESTAMP and MAX_TIMESTAMP.
Use concrete values instead of tsSpecified and oldMs in the examples below.
The example below shows how to load df DataFrame with different timestamps.
tsSpecified is specified by the user.
HBaseTableCatalog defines the HBase and Relation schema.
writeCatalog defines the catalog for the schema mapping.
[source, scala]
----
val df = sqlContext.read
.options(Map(HBaseTableCatalog.tableCatalog -> writeCatalog, HBaseSparkConf.TIMESTAMP -> tsSpecified.toString))
.format("org.apache.hadoop.hbase.spark")
.load()
----
The example below shows how to load df DataFrame with different time ranges.
oldMs is specified by the user.
[source, scala]
----
val df = sqlContext.read
.options(Map(HBaseTableCatalog.tableCatalog -> writeCatalog, HBaseSparkConf.MIN_TIMESTAMP -> "0",
HBaseSparkConf.MAX_TIMESTAMP -> oldMs.toString))
.format("org.apache.hadoop.hbase.spark")
.load()
----
After loading df DataFrame, users can query data.
[source, scala]
----
df.registerTempTable("table")
sqlContext.sql("select count(col1) from table").show
----
====
.Native Avro support
====
The link:https://github.com/apache/hbase-connectors/tree/master/spark[hbase-spark integration]
connector supports different data formats like Avro, JSON, etc. The use case below
shows how Spark supports Avro. Users can persist the Avro record into HBase directly. Internally,
the Avro schema is converted to a native Spark Catalyst data type automatically.
Note that both the key and value parts in an HBase table can be defined in Avro format.
1) Define catalog for the schema mapping:
[source, scala]
----
def catalog = s"""{
|"table":{"namespace":"default", "name":"Avrotable"},
|"rowkey":"key",
|"columns":{
|"col0":{"cf":"rowkey", "col":"key", "type":"string"},
|"col1":{"cf":"cf1", "col":"col1", "type":"binary"}
|}
|}""".stripMargin
----
`catalog` is a schema for an HBase table named `Avrotable`, with row key as key and
one column, col1. The rowkey also has to be defined in detail as a column (col0),
which has a specific cf (rowkey).
2) Prepare the Data:
[source, scala]
----
object AvroHBaseRecord {
val schemaString =
s"""{"namespace": "example.avro",
| "type": "record", "name": "User",
| "fields": [
| {"name": "name", "type": "string"},
| {"name": "favorite_number", "type": ["int", "null"]},
| {"name": "favorite_color", "type": ["string", "null"]},
| {"name": "favorite_array", "type": {"type": "array", "items": "string"}},
| {"name": "favorite_map", "type": {"type": "map", "values": "int"}}
| ] }""".stripMargin
val avroSchema: Schema = {
val p = new Schema.Parser
p.parse(schemaString)
}
def apply(i: Int): AvroHBaseRecord = {
val user = new GenericData.Record(avroSchema);
user.put("name", s"name${"%03d".format(i)}")
user.put("favorite_number", i)
user.put("favorite_color", s"color${"%03d".format(i)}")
val favoriteArray = new GenericData.Array[String](2, avroSchema.getField("favorite_array").schema())
favoriteArray.add(s"number${i}")
favoriteArray.add(s"number${i+1}")
user.put("favorite_array", favoriteArray)
import collection.JavaConverters._
val favoriteMap = Map[String, Int](("key1" -> i), ("key2" -> (i+1))).asJava
user.put("favorite_map", favoriteMap)
val avroByte = AvroSedes.serialize(user, avroSchema)
AvroHBaseRecord(s"name${"%03d".format(i)}", avroByte)
}
}
val data = (0 to 255).map { i =>
AvroHBaseRecord(i)
}
----
`schemaString` is defined first, then it is parsed to get `avroSchema`. `avroSchema` is used to
generate `AvroHBaseRecord`. `data` prepared by users is a local Scala collection
which has 256 `AvroHBaseRecord` objects.
3) Save DataFrame:
[source, scala]
----
sc.parallelize(data).toDF.write.options(
Map(HBaseTableCatalog.tableCatalog -> catalog, HBaseTableCatalog.newTable -> "5"))
.format("org.apache.spark.sql.execution.datasources.hbase")
.save()
----
Given a data frame with specified schema `catalog`, above will create an HBase table with 5
regions and save the data frame inside.
4) Load the DataFrame
[source, scala]
----
def avroCatalog = s"""{
|"table":{"namespace":"default", "name":"avrotable"},
|"rowkey":"key",
|"columns":{
|"col0":{"cf":"rowkey", "col":"key", "type":"string"},
|"col1":{"cf":"cf1", "col":"col1", "avro":"avroSchema"}
|}
|}""".stripMargin
def withCatalog(cat: String): DataFrame = {
sqlContext
.read
.options(Map("avroSchema" -> AvroHBaseRecord.schemaString, HBaseTableCatalog.tableCatalog -> avroCatalog))
.format("org.apache.spark.sql.execution.datasources.hbase")
.load()
}
val df = withCatalog(avroCatalog)
----
In `withCatalog` function, `read` returns a DataFrameReader that can be used to read data in as a DataFrame.
The `option` function adds input options for the underlying data source to the DataFrameReader.
There are two options: one is to set `avroSchema` as `AvroHBaseRecord.schemaString`, and one is to
set `HBaseTableCatalog.tableCatalog` as `avroCatalog`. The `load()` function loads input in as a DataFrame.
The data frame `df` returned by the `withCatalog` function can be used to access the HBase table.
5) SQL Query
[source, scala]
----
df.registerTempTable("avrotable")
val c = sqlContext.sql("select count(1) from avrotable")
c.show
----
After loading the df DataFrame, users can query the data. `registerTempTable` registers the df DataFrame
as a temporary table using the table name avrotable. The `sqlContext.sql` function allows the
user to execute SQL queries.
====

View File

@ -1,42 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[appendix]
[[sql]]
== SQL over HBase
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
The following projects offer some support for SQL over HBase.
[[phoenix]]
=== Apache Phoenix
link:https://phoenix.apache.org[Apache Phoenix]
=== Trafodion
link:https://trafodion.incubator.apache.org/[Trafodion: Transactional SQL-on-HBase]
:numbered:

View File

@ -1,278 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[thrift]]
= Thrift API and Filter Language
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
Apache link:https://thrift.apache.org/[Thrift] is a cross-platform, cross-language development framework.
HBase includes a Thrift API and filter language.
The Thrift API relies on client and server processes.
You can configure Thrift for secure authentication at the server and client side, by following the procedures in <<security.client.thrift>> and <<security.gateway.thrift>>.
The rest of this chapter discusses the filter language provided by the Thrift API.
[[thrift.filter_language]]
== Filter Language
Thrift Filter Language was introduced in HBase 0.92.
It allows you to perform server-side filtering when accessing HBase over Thrift or in the HBase shell.
You can find out more about shell integration by using the `scan help` command in the shell.
You specify a filter as a string, which is parsed on the server to construct the filter.
[[general_syntax]]
=== General Filter String Syntax
A simple filter expression is expressed as a string:
----
"FilterName (argument, argument,..., argument)"
----
Keep the following syntax guidelines in mind.
* Specify the name of the filter followed by the comma-separated argument list in parentheses.
* If the argument represents a string, it should be enclosed in single quotes (`'`).
* Arguments which represent a boolean, an integer, or a comparison operator (such as <, >, or !=) should not be enclosed in quotes.
* The filter name must be a single word.
All ASCII characters are allowed except for whitespace, single quotes and parentheses.
* The filter's arguments can contain any ASCII character.
If single quotes are present in the argument, they must be escaped by an additional preceding single quote.
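For instance, here is an illustrative filter string (not one of the original examples) whose string argument contains a single quote, escaped by doubling it:

[source]
----
"ValueFilter (=, 'binaryprefix:It''s')"
----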
=== Compound Filters and Operators
.Binary Operators
`AND`::
If the `AND` operator is used, the key-value must satisfy both filters.
`OR`::
If the `OR` operator is used, the key-value must satisfy at least one of the filters.
.Unary Operators
`SKIP`::
For a particular row, if any of the key-values fail the filter condition, the entire row is skipped.
`WHILE`::
For a particular row, key-values will be emitted until a key-value is reached that fails the filter condition.
.Compound Operators
====
You can combine multiple operators to create a hierarchy of filters, such as the following example:
[source]
----
(Filter1 AND Filter2) OR (Filter3 AND Filter4)
----
====
=== Order of Evaluation
. Parentheses have the highest precedence.
. The unary operators `SKIP` and `WHILE` are next, and have the same precedence.
. The binary operators follow. `AND` has highest precedence, followed by `OR`.
.Precedence Example
====
[source]
----
Filter1 AND Filter2 OR Filter3
is evaluated as
(Filter1 AND Filter2) OR Filter3
----
[source]
----
Filter1 AND SKIP Filter2 OR Filter3
is evaluated as
(Filter1 AND (SKIP Filter2)) OR Filter3
----
====
You can use parentheses to explicitly control the order of evaluation.
=== Compare Operator
The following compare operators are provided:
. LESS (<)
. LESS_OR_EQUAL (<=)
. EQUAL (=)
. NOT_EQUAL (!=)
. GREATER_OR_EQUAL (>=)
. GREATER (>)
. NO_OP (no operation)
The client should use the symbols (<, <=, =, !=, >, >=) to express compare operators.
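For example, here is an illustrative filter string (not one of the original examples) combining a compare operator with a comparator, which is described next:

[source]
----
"RowFilter (<=, 'binary:xyz')"
----

This returns all key-values in rows whose row key is lexicographically less than or equal to `xyz`.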
=== Comparator
A comparator can be any of the following:
. _BinaryComparator_ - This lexicographically compares against the specified byte array using Bytes.compareTo(byte[], byte[])
. _BinaryPrefixComparator_ - This lexicographically compares against a specified byte array.
It only compares up to the length of this byte array.
. _RegexStringComparator_ - This compares against the specified byte array using the given regular expression.
Only EQUAL and NOT_EQUAL comparisons are valid with this comparator.
. _SubStringComparator_ - This tests if the given substring appears in a specified byte array.
The comparison is case insensitive.
Only EQUAL and NOT_EQUAL comparisons are valid with this comparator.
The general syntax of a comparator is: `ComparatorType:ComparatorValue`
The ComparatorType for the various comparators is as follows:
. _BinaryComparator_ - binary
. _BinaryPrefixComparator_ - binaryprefix
. _RegexStringComparator_ - regexstring
. _SubStringComparator_ - substring
The ComparatorValue can be any value.
.Example ComparatorValues
. `binary:abc` will match everything that is lexicographically greater than "abc"
. `binaryprefix:abc` will match everything whose first 3 characters are lexicographically equal to "abc"
. `regexstring:ab*yz` will match everything that matches the regular expression `ab*yz` (an `a`, zero or more `b` characters, then `yz`)
. `substring:abc123` will match everything that begins with the substring "abc123"
[[examplephpclientprogram]]
=== Example PHP Client Program that uses the Filter Language
[source,php]
----
<?php
$_SERVER['PHP_ROOT'] = realpath(dirname(__FILE__).'/..');
require_once $_SERVER['PHP_ROOT'].'/flib/__flib.php';
flib_init(FLIB_CONTEXT_SCRIPT);
require_module('storage/hbase');
$hbase = new HBase('<server_name_running_thrift_server>', <port on which thrift server is running>);
$hbase->open();
$client = $hbase->getClient();
$result = $client->scannerOpenWithFilterString('table_name', "(PrefixFilter ('row2') AND (QualifierFilter (>=, 'binary:xyz'))) AND (TimestampsFilter ( 123, 456))");
$to_print = $client->scannerGetList($result,1);
while ($to_print) {
print_r($to_print);
$to_print = $client->scannerGetList($result,1);
}
$client->scannerClose($result);
?>
----
=== Example Filter Strings
* `"PrefixFilter ('Row') AND PageFilter (1) AND FirstKeyOnlyFilter ()"` will return all key-value pairs that match the following conditions:
+
. The row containing the key-value should have prefix _Row_
. The key-value must be located in the first row of the table
. The key-value pair must be the first key-value in the row
+
* `"(RowFilter (=, 'binary:Row 1') AND TimeStampsFilter (74689, 89734)) OR ColumnRangeFilter ('abc', true, 'xyz', false))"` will return all key-value pairs that match both the following conditions:
** The key-value is in a row having row key _Row 1_
** The key-value must have a timestamp of either 74689 or 89734.
** Or it must match the following condition:
*** The key-value pair must be in a column that is lexicographically >= abc and < xyz 
+
* `"SKIP ValueFilter (0)"` will skip the entire row if any of the values in the row is not 0
[[individualfiltersyntax]]
=== Individual Filter Syntax
KeyOnlyFilter::
This filter doesn't take any arguments.
It returns only the key component of each key-value.
FirstKeyOnlyFilter::
This filter doesn't take any arguments.
It returns only the first key-value from each row.
PrefixFilter::
This filter takes one argument, a prefix of a row key.
It returns only those key-values present in a row that starts with the specified row prefix.
ColumnPrefixFilter::
This filter takes one argument, a column prefix.
It returns only those key-values present in a column that starts with the specified column prefix.
The column prefix must be of the form: `'qualifier'`.
MultipleColumnPrefixFilter::
This filter takes a list of column prefixes.
It returns key-values that are present in a column that starts with any of the specified column prefixes.
Each of the column prefixes must be of the form: `'qualifier'`.
ColumnCountGetFilter::
This filter takes one argument, a limit.
It returns the first limit number of columns in the table.
PageFilter::
This filter takes one argument, a page size.
It returns page size number of rows from the table.
ColumnPaginationFilter::
This filter takes two arguments, a limit and an offset.
It returns limit number of columns after offset number of columns.
It does this for all the rows.
InclusiveStopFilter::
This filter takes one argument, a row key on which to stop scanning.
It returns all key-values present in rows up to and including the specified row.
TimeStampsFilter::
This filter takes a list of timestamps.
It returns those key-values whose timestamp matches any of the specified timestamps.
RowFilter::
This filter takes a compare operator and a comparator.
It compares each row key with the comparator using the compare operator, and if the comparison returns true, it returns all the key-values in that row.
FamilyFilter::
This filter takes a compare operator and a comparator.
It compares each column family name with the comparator using the compare operator, and if the comparison returns true, it returns all the Cells in that column family.
QualifierFilter::
This filter takes a compare operator and a comparator.
It compares each qualifier name with the comparator using the compare operator, and if the comparison returns true, it returns all the key-values in that column.
ValueFilter::
This filter takes a compare operator and a comparator.
It compares each value with the comparator using the compare operator, and if the comparison returns true, it returns that key-value.
DependentColumnFilter::
This filter takes two arguments, a family and a qualifier.
It tries to locate this column in each row and returns all key-values in that row that have the same timestamp.
If the row doesn't contain the specified column, none of the key-values in that row will be returned.
SingleColumnValueFilter::
This filter takes a column family, a qualifier, a compare operator and a comparator.
If the specified column is not found, all the columns of that row will be emitted.
If the column is found and the comparison with the comparator returns true, all the columns of the row will be emitted.
If the condition fails, the row will not be emitted.
SingleColumnValueExcludeFilter::
This filter takes the same arguments and behaves the same as SingleColumnValueFilter; however, if the column is found and the condition passes, all the columns of the row will be emitted except for the tested column value.
ColumnRangeFilter::
This filter is used for selecting only those keys with columns that are between minColumn and maxColumn.
It also takes two boolean variables to indicate whether to include the minColumn and maxColumn or not.


@ -1,57 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[tracing]]
= Tracing
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
== Overview
HBase used to depend on the HTrace project for tracing. After the Apache HTrace project moved to the Attic/retired, we decided to move to https://opentelemetry.io[OpenTelemetry] in https://issues.apache.org/jira/browse/HBASE-22120[HBASE-22120].
Basic support for tracing is in place: we added tracing for the async client, RPC, region read/write/scan operations, and the WAL. We use the opentelemetry-api to implement the tracing support manually in code, as our code base is far too complicated to be instrumented through a java agent. But notice that you still need to attach the opentelemetry java agent to enable tracing. Please see the official site for https://opentelemetry.io/[OpenTelemetry] and the documentation for https://github.com/open-telemetry/opentelemetry-java-instrumentation[opentelemetry-java-instrumentation] for more details on how to properly configure opentelemetry instrumentation.
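For a flavor of what manual instrumentation with the opentelemetry-api looks like, here is a minimal, illustrative sketch (the tracer and span names are made up, not taken from the HBase code base):

[source,java]
----
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class TraceSketch {
  public static void main(String[] args) {
    // The attached agent supplies the global OpenTelemetry instance;
    // without it, this returns a no-op tracer and the code still runs.
    Tracer tracer = GlobalOpenTelemetry.getTracer("example-instrumentation");
    Span span = tracer.spanBuilder("example.operation").startSpan();
    try (Scope ignored = span.makeCurrent()) {
      // ... the traced work goes here ...
    } finally {
      span.end();
    }
  }
}
----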
== Usage
=== Enable Tracing
See this section in hbase-env.sh
[source,shell]
----
# Uncomment to enable trace, you can change the options to use other exporters such as jaeger or
# zipkin. See https://github.com/open-telemetry/opentelemetry-java-instrumentation on how to
# configure exporters and other components through system properties.
# export HBASE_TRACE_OPTS="-Dotel.resource.attributes=service.name=HBase -Dotel.traces.exporter=logging -Dotel.metrics.exporter=none"
----
Uncomment this line to enable tracing. The default config outputs the tracing data to the log. Please see the documentation for https://github.com/open-telemetry/opentelemetry-java-instrumentation[opentelemetry-java-instrumentation] for more details on how to export tracing data to other tracing systems such as the OTel collector, Jaeger or Zipkin, what the _service.name_ means, how to change the sampling rate, and so on.
NOTE: The https://github.com/open-telemetry/opentelemetry-java/blob/v1.0.1/exporters/logging/src/main/java/io/opentelemetry/exporter/logging/LoggingSpanExporter.java[LoggingSpanExporter] uses java.util.logging (JUL) for logging tracing data, and the logger is initialized in the opentelemetry java agent, which seems to happen ahead of our JUL-to-slf4j bridge initialization, so it will always log the tracing data to the console. We highly suggest that you use other tracing systems to collect and view tracing data instead of logging.
=== Performance Impact
According to the result in https://issues.apache.org/jira/browse/HBASE-25658[HBASE-25658], the performance impact is minimal. Of course, the test cluster was not under heavy load, so if you find that enabling tracing impacts performance, try lowering the sampling rate. See the documentation for configuring the https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md#sampler[sampler] for more details.
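If you do lower the sampling rate, the standard autoconfigure system properties can be appended to _HBASE_TRACE_OPTS_; an illustrative sketch (the 0.01 ratio is a made-up value, and you should confirm the property names against the sampler documentation above for your agent version):

[source,shell]
----
# Sample roughly 1% of traces (illustrative values)
# export HBASE_TRACE_OPTS="${HBASE_TRACE_OPTS} -Dotel.traces.sampler=traceidratio -Dotel.traces.sampler.arg=0.01"
----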

File diff suppressed because it is too large


@ -1,331 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[unit.tests]]
= Unit Testing HBase Applications
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
This chapter discusses unit testing your HBase application using JUnit, Mockito, MRUnit, and HBaseTestingUtility.
Much of the information comes from link:http://blog.cloudera.com/blog/2013/09/how-to-test-hbase-applications-using-popular-tools/[a community blog post about testing HBase applications].
For information on unit tests for HBase itself, see <<hbase.tests,hbase.tests>>.
== JUnit
HBase uses link:http://junit.org[JUnit] for unit tests.
This example will add unit tests to the following example class:
[source,java]
----
public class MyHBaseDAO {
public static void insertRecord(Table table, HBaseTestObj obj)
throws Exception {
Put put = createPut(obj);
table.put(put);
}
private static Put createPut(HBaseTestObj obj) {
Put put = new Put(Bytes.toBytes(obj.getRowKey()));
put.add(Bytes.toBytes("CF"), Bytes.toBytes("CQ-1"),
Bytes.toBytes(obj.getData1()));
put.add(Bytes.toBytes("CF"), Bytes.toBytes("CQ-2"),
Bytes.toBytes(obj.getData2()));
return put;
}
}
----
The first step is to add JUnit dependencies to your Maven POM file:
[source,xml]
----
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
<scope>test</scope>
</dependency>
----
Next, add some unit tests to your code.
Tests are annotated with `@Test`.
Here is the unit test class:
[source,java]
----
public class TestMyHbaseDAOData {
@Test
public void testCreatePut() throws Exception {
HBaseTestObj obj = new HBaseTestObj();
obj.setRowKey("ROWKEY-1");
obj.setData1("DATA-1");
obj.setData2("DATA-2");
Put put = MyHBaseDAO.createPut(obj);
assertEquals(obj.getRowKey(), Bytes.toString(put.getRow()));
assertEquals(obj.getData1(), Bytes.toString(put.get(Bytes.toBytes("CF"), Bytes.toBytes("CQ-1")).get(0).getValue()));
assertEquals(obj.getData2(), Bytes.toString(put.get(Bytes.toBytes("CF"), Bytes.toBytes("CQ-2")).get(0).getValue()));
}
}
----
These tests ensure that your `createPut` method creates, populates, and returns a `Put` object with expected values.
Of course, JUnit can do much more than this.
For an introduction to JUnit, see https://github.com/junit-team/junit/wiki/Getting-started.
[[mockito]]
== Mockito
Mockito is a mocking framework.
It goes further than JUnit by allowing you to test the interactions between objects without having to replicate the entire environment.
You can read more about Mockito at its project site, https://code.google.com/p/mockito/.
You can use Mockito to do unit testing on smaller units.
For instance, you can mock a `org.apache.hadoop.hbase.Server` instance or a `org.apache.hadoop.hbase.master.MasterServices` interface reference rather than a full-blown `org.apache.hadoop.hbase.master.HMaster`.
This example builds upon the example code in <<unit.tests,unit.tests>>, to test the `insertRecord` method.
First, add a dependency for Mockito to your Maven POM file.
[source,xml]
----
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-core</artifactId>
<version>2.1.0</version>
<scope>test</scope>
</dependency>
----
Next, add a `@RunWith` annotation to your test class, to direct it to use Mockito.
[source,java]
----
@RunWith(MockitoJUnitRunner.class)
public class TestMyHBaseDAO {
@Mock
private Configuration config;
@Mock
private Connection connection;
@Mock
private Table table;
@Captor
private ArgumentCaptor<Put> putCaptor;
@Test
public void testInsertRecord() throws Exception {
//return mock table when getTable is called
when(connection.getTable(TableName.valueOf("tablename"))).thenReturn(table);
//create test object and make a call to the DAO that needs testing
HBaseTestObj obj = new HBaseTestObj();
obj.setRowKey("ROWKEY-1");
obj.setData1("DATA-1");
obj.setData2("DATA-2");
MyHBaseDAO.insertRecord(table, obj);
verify(table).put(putCaptor.capture());
Put put = putCaptor.getValue();
assertEquals(Bytes.toString(put.getRow()), obj.getRowKey());
assert(put.has(Bytes.toBytes("CF"), Bytes.toBytes("CQ-1")));
assert(put.has(Bytes.toBytes("CF"), Bytes.toBytes("CQ-2")));
assertEquals(Bytes.toString(put.get(Bytes.toBytes("CF"),Bytes.toBytes("CQ-1")).get(0).getValue()), "DATA-1");
assertEquals(Bytes.toString(put.get(Bytes.toBytes("CF"),Bytes.toBytes("CQ-2")).get(0).getValue()), "DATA-2");
}
}
----
This code populates `HBaseTestObj` with `ROWKEY-1`, `DATA-1`, `DATA-2` as values.
It then inserts the record into the mocked table.
The Put that the DAO would have inserted is captured, and values are tested to verify that they are what you expected them to be.
The key here is to manage Connection and Table instance creation outside the DAO.
This allows you to mock them cleanly and test Puts as shown above.
Similarly, you can now expand into other operations such as Get, Scan, or Delete.
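The same pattern extends to reads. A minimal, illustrative sketch of mocking a `Get` might look like the following; the stubbed value is made up, and it assumes the same `@Mock` `table` field and static Mockito/JUnit imports as above:

[source,java]
----
@Test
public void testGetRecord() throws Exception {
//stub the mocked table to return a canned Result for any Get
Result result = mock(Result.class);
when(result.value()).thenReturn(Bytes.toBytes("DATA-1"));
when(table.get(any(Get.class))).thenReturn(result);
Result r = table.get(new Get(Bytes.toBytes("ROWKEY-1")));
assertEquals("DATA-1", Bytes.toString(r.value()));
}
----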
== MRUnit
link:https://mrunit.apache.org/[Apache MRUnit] is a library that allows you to unit-test MapReduce jobs.
You can use it to test HBase jobs in the same way as other MapReduce jobs.
Given a MapReduce job that writes to an HBase table called `MyTest`, which has one column family called `CF`, the reducer of such a job could look like the following:
[source,java]
----
public class MyReducer extends TableReducer<Text, Text, ImmutableBytesWritable> {
public static final byte[] CF = "CF".getBytes();
public static final byte[] QUALIFIER = "CQ-1".getBytes();
public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
//bunch of processing to extract data to be inserted, in our case, let's say we are simply
//appending all the records we receive from the mapper for this particular
//key and insert one record into HBase
StringBuffer data = new StringBuffer();
Put put = new Put(Bytes.toBytes(key.toString()));
for (Text val : values) {
data = data.append(val);
}
put.add(CF, QUALIFIER, Bytes.toBytes(data.toString()));
//write to HBase
context.write(new ImmutableBytesWritable(Bytes.toBytes(key.toString())), put);
}
}
----
To test this code, the first step is to add a dependency to MRUnit to your Maven POM file.
[source,xml]
----
<dependency>
<groupId>org.apache.mrunit</groupId>
<artifactId>mrunit</artifactId>
<version>1.0.0</version>
<scope>test</scope>
</dependency>
----
Next, use the `ReduceDriver` provided by MRUnit in your Reducer test.
[source,java]
----
public class MyReducerTest {
ReduceDriver<Text, Text, ImmutableBytesWritable, Writable> reduceDriver;
byte[] CF = "CF".getBytes();
byte[] QUALIFIER = "CQ-1".getBytes();
@Before
public void setUp() {
MyReducer reducer = new MyReducer();
reduceDriver = ReduceDriver.newReduceDriver(reducer);
}
@Test
public void testHBaseInsert() throws IOException {
String strKey = "RowKey-1", strValue = "DATA", strValue1 = "DATA1",
strValue2 = "DATA2";
List<Text> list = new ArrayList<Text>();
list.add(new Text(strValue));
list.add(new Text(strValue1));
list.add(new Text(strValue2));
//since in our case all that the reducer is doing is appending the records that the mapper
//sends it, we should get the following back
String expectedOutput = strValue + strValue1 + strValue2;
//Setup Input, mimic what mapper would have passed
//to the reducer and run test
reduceDriver.withInput(new Text(strKey), list);
//run the reducer and get its output
List<Pair<ImmutableBytesWritable, Writable>> result = reduceDriver.run();
//extract key from result and verify
assertEquals(Bytes.toString(result.get(0).getFirst().get()), strKey);
//extract value for CF/QUALIFIER and verify
Put a = (Put)result.get(0).getSecond();
String c = Bytes.toString(a.get(CF, QUALIFIER).get(0).getValue());
assertEquals(expectedOutput,c );
}
}
----
Your MRUnit test verifies that the output is as expected, the Put that is inserted into HBase has the correct value, and the ColumnFamily and ColumnQualifier have the correct values.
MRUnit includes a MapperDriver to test mapping jobs, and you can use MRUnit to test other operations, including reading from HBase, processing data, or writing to HDFS.
== Integration Testing with an HBase Mini-Cluster
HBase ships with HBaseTestingUtility, which makes it easy to write integration tests using a [firstterm]_mini-cluster_.
The first step is to add some dependencies to your Maven POM file.
Check the versions to be sure they are appropriate.
[source,xml]
----
<properties>
<hbase.version>2.0.0-SNAPSHOT</hbase.version>
</properties>
<dependencies>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-testing-util</artifactId>
<version>${hbase.version}</version>
<scope>test</scope>
</dependency>
</dependencies>
----
This code represents an integration test for the MyHBaseDAO insert shown in <<unit.tests,unit.tests>>.
[source,java]
----
public class MyHBaseIntegrationTest {
private static HBaseTestingUtility utility;
byte[] CF = "CF".getBytes();
byte[] CQ1 = "CQ-1".getBytes();
byte[] CQ2 = "CQ-2".getBytes();
@Before
public void setup() throws Exception {
utility = new HBaseTestingUtility();
utility.startMiniCluster();
}
@After
public void tearDown() throws Exception {
utility.shutdownMiniCluster();
}
@Test
public void testInsert() throws Exception {
Table table = utility.createTable(Bytes.toBytes("MyTest"), CF);
HBaseTestObj obj = new HBaseTestObj();
obj.setRowKey("ROWKEY-1");
obj.setData1("DATA-1");
obj.setData2("DATA-2");
MyHBaseDAO.insertRecord(table, obj);
Get get1 = new Get(Bytes.toBytes(obj.getRowKey()));
get1.addColumn(CF, CQ1);
Result result1 = table.get(get1);
assertEquals(Bytes.toString(result1.getRow()), obj.getRowKey());
assertEquals(Bytes.toString(result1.value()), obj.getData1());
Get get2 = new Get(Bytes.toBytes(obj.getRowKey()));
get2.addColumn(CF, CQ2);
Result result2 = table.get(get2);
assertEquals(Bytes.toString(result2.getRow()), obj.getRowKey());
assertEquals(Bytes.toString(result2.value()), obj.getData2());
}
}
----
This code creates an HBase mini-cluster and starts it.
Next, it creates a table called `MyTest` with one column family, `CF`.
A record is inserted, a Get is performed from the same table, and the insertion is verified.
NOTE: Starting the mini-cluster takes about 20-30 seconds, but that should be appropriate for integration testing.
See the paper at link:http://blog.sematext.com/2010/08/30/hbase-case-study-using-hbasetestingutility-for-local-testing-development/[HBase Case-Study: Using HBaseTestingUtility for Local Testing and
Development] (2010) for more information about HBaseTestingUtility.


@ -1,838 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[upgrading]]
= Upgrading
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
You cannot skip major versions when upgrading. If you are upgrading from version 0.98.x to 2.x, you must first go from 0.98.x to 1.2.x and then go from 1.2.x to 2.x.
Review <<configuration>>, in particular <<hadoop>>. Familiarize yourself with <<hbase_supported_tested_definitions>>.
[[hbase.versioning]]
== HBase version number and compatibility
[[hbase.versioning.post10]]
=== Aspirational Semantic Versioning
Starting with the 1.0.0 release, HBase is working towards link:http://semver.org/[Semantic Versioning] for its release versioning. In summary:
.Given a version number MAJOR.MINOR.PATCH, increment the:
* MAJOR version when you make incompatible API changes,
* MINOR version when you add functionality in a backwards-compatible manner, and
* PATCH version when you make backwards-compatible bug fixes.
* Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
[[hbase.versioning.compat]]
.Compatibility Dimensions
In addition to the usual API versioning considerations, HBase has other compatibility dimensions that we need to consider.
.Client-Server wire protocol compatibility
* Allows updating client and server out of sync.
* We could only allow upgrading the server first, i.e. the server would be backward compatible to an old client; that way new APIs are OK.
* Example: A user should be able to use an old client to connect to an upgraded cluster.
.Server-Server protocol compatibility
* Servers of different versions can co-exist in the same cluster.
* The wire protocol between servers is compatible.
* Workers for distributed tasks, such as replication and log splitting, can co-exist in the same cluster.
* Dependent protocols (such as using ZK for coordination) will also not be changed.
* Example: A user can perform a rolling upgrade.
.File format compatibility
* File formats are backward and forward compatible.
* Example: File, ZK encoding, directory layout is upgraded automatically as part of an HBase upgrade. User can downgrade to the older version and everything will continue to work.
.Client API compatibility
* Allow changing or removing existing client APIs.
* An API needs to be deprecated for a whole major version before we will change/remove it.
** An example: An API was deprecated in 2.0.1 and will be marked for deletion in 4.0.0. On the other hand, an API deprecated in 2.0.0 can be removed in 3.0.0.
** Occasionally mistakes are made and internal classes are marked with a higher access level than they should. In these rare circumstances, we will accelerate the deprecation schedule to the next major version (i.e., deprecated in 2.2.x, marked `IA.Private` 3.0.0). Such changes are communicated and explained via release note in Jira.
* APIs available in a patch version will be available in all later patch versions. However, new APIs may be added which will not be available in earlier patch versions.
* New APIs introduced in a patch version will only be added in a source compatible way footnote:[See 'Source Compatibility' https://blogs.oracle.com/darcy/entry/kinds_of_compatibility]: i.e. code that implements public APIs will continue to compile.
** Example: A user using a newly deprecated API does not need to modify application code with HBase API calls until the next major version.
.Client Binary compatibility
* Client code written to APIs available in a given patch release can run unchanged (no recompilation needed) against the new jars of later patch versions.
* Client code written to APIs available in a given patch release might not run against the old jars from an earlier patch version.
** Example: Old compiled client code will work unchanged with the new jars.
* If a Client implements an HBase Interface, a recompile MAY be required when upgrading to a newer minor version (see release notes
for warnings about incompatible changes). All effort will be made to provide a default implementation so this case should not arise.
.Server-Side Limited API compatibility (taken from Hadoop)
* Internal APIs are marked as Stable, Evolving, or Unstable
* This implies binary compatibility for coprocessors and plugins (pluggable classes, including replication) as long as these are only using marked interfaces/classes.
* Example: Old compiled Coprocessor, Filter, or Plugin code will work unchanged with the new jars.
.Dependency Compatibility
* An upgrade of HBase will not require an incompatible upgrade of a dependent project, except for Apache Hadoop.
* An upgrade of HBase will not require an incompatible upgrade of the Java runtime.
* Example: Upgrading HBase to a version that supports _Dependency Compatibility_ won't require that you upgrade your Apache ZooKeeper service.
* Example: If your current version of HBase supported running on JDK 8, then an upgrade to a version that supports _Dependency Compatibility_ will also run on JDK 8.
.Hadoop Versions
[TIP]
====
Previously, we tried to maintain dependency compatibility for the underlying Hadoop service, but over the last few years this has proven untenable. While the HBase project attempts to maintain support for older versions of Hadoop, we drop the "supported" designator for minor versions that fail to continue to see releases. Additionally, the Hadoop project has its own set of compatibility guidelines, which means in some cases having to update to a newer supported minor release might break some of our compatibility promises.
====
.Operational Compatibility
* Metric changes
* Behavioral changes of services
* JMX APIs exposed via the `/jmx/` endpoint
.Summary
* A patch upgrade is a drop-in replacement. Any change that is not Java binary and source compatible would not be allowed.footnote:[See http://docs.oracle.com/javase/specs/jls/se7/html/jls-13.html.] Downgrading versions within patch releases may not be compatible.
* A minor upgrade requires no application/client code modification. Ideally it would be a drop-in replacement but client code, coprocessors, filters, etc might have to be recompiled if new jars are used.
* A major upgrade allows the HBase community to make breaking changes.
.Compatibility Matrix footnote:[Note that this indicates what could break, not that it will break. We will/should add specifics in our release notes.]
[cols="1,1,1,1"]
|===
| | Major | Minor | Patch
|Client-Server wire Compatibility| N |Y |Y
|Server-Server Compatibility |N |Y |Y
|File Format Compatibility | N footnote:[comp_matrix_offline_upgrade_note,Running an offline upgrade tool without downgrade might be needed. We will typically only support migrating data from major version X to major version X+1.] | Y |Y
|Client API Compatibility | N | Y |Y
|Client Binary Compatibility | N | N |Y
4+|Server-Side Limited API Compatibility
>| Stable | N | Y | Y
>| Evolving | N |N |Y
>| Unstable | N |N |N
|Dependency Compatibility | N |Y |Y
|Operational Compatibility | N |N |Y
|===
[[hbase.client.api.surface]]
==== HBase API Surface
HBase has a lot of API points, but for the compatibility matrix above, we differentiate between Client API, Limited Private API, and Private API. HBase uses link:https://yetus.apache.org/documentation/in-progress/interface-classification/[Apache Yetus Audience Annotations] to guide downstream expectations for stability.
* InterfaceAudience (link:https://yetus.apache.org/documentation/in-progress/javadocs/org/apache/yetus/audience/InterfaceAudience.html[javadocs]): captures the intended audience, possible values include:
- Public: safe for end users and external projects
- LimitedPrivate: used for internals we expect to be pluggable, such as coprocessors
- Private: strictly for use within HBase itself
Classes which are defined as `IA.Private` may be used as parameters or return values for interfaces which are declared `IA.LimitedPrivate`. Treat the `IA.Private` object as opaque; do not try to access its methods or fields directly.
* InterfaceStability (link:https://yetus.apache.org/documentation/in-progress/javadocs/org/apache/yetus/audience/InterfaceStability.html[javadocs]): describes what types of interface changes are permitted. Possible values include:
- Stable: the interface is fixed and is not expected to change
- Evolving: the interface may change in future minor versions
- Unstable: the interface may change at any time
Please keep in mind the following interactions between the `InterfaceAudience` and `InterfaceStability` annotations within the HBase project:
* `IA.Public` classes are inherently stable and adhere to our stability guarantees relating to the type of upgrade (major, minor, or patch).
* `IA.LimitedPrivate` classes should always be annotated with one of the given `InterfaceStability` values. If they are not, you should presume they are `IS.Unstable`.
* `IA.Private` classes should be considered implicitly unstable, with no guarantee of stability between releases.
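As an illustrative sketch (made-up types, not from the HBase code base), these annotations are applied at the class level:

[source,java]
----
import org.apache.yetus.audience.InterfaceAudience;
import org.apache.yetus.audience.InterfaceStability;

// Safe for end users; covered by the stability guarantees above.
@InterfaceAudience.Public
public interface ExampleClientApi {
  void doSomething();
}

// Pluggable internal extension point; may change in minor versions.
@InterfaceAudience.LimitedPrivate("Coprocessors")
@InterfaceStability.Evolving
interface ExampleExtensionPoint {
  void onEvent();
}
----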
[[hbase.client.api]]
HBase Client API::
HBase Client API consists of all the classes or methods that are marked with the InterfaceAudience.Public interface. All main classes in hbase-client and dependent modules have either the InterfaceAudience.Public, InterfaceAudience.LimitedPrivate, or InterfaceAudience.Private marker. Not all classes in other modules (hbase-server, etc) have the marker. If a class is not annotated with one of these, it is assumed to be an InterfaceAudience.Private class.
[[hbase.limitetprivate.api]]
HBase LimitedPrivate API::
The LimitedPrivate annotation comes with a set of target consumers for the interfaces. Those consumers are coprocessors, Phoenix, replication endpoint implementations, or similar. At this point, HBase only guarantees source and binary compatibility for these interfaces between patch versions.
[[hbase.private.api]]
HBase Private API::
All classes annotated with InterfaceAudience.Private, and all classes that do not have the annotation, are for HBase internal use only. The interfaces and method signatures can change at any point in time. If you are relying on a particular interface that is marked Private, you should open a jira to propose changing the interface to be Public or LimitedPrivate, or to have an interface exposed for this purpose.
[[hbase.binary.compatibility]]
.Binary Compatibility
When we say two HBase versions are compatible, we mean that the versions are wire and binary compatible. Compatible HBase versions means that clients can talk to compatible but differently versioned servers. It means too that you can just swap out the jars of one version and replace them with the jars of another, compatible version and all will just work. Unless otherwise specified, HBase point versions are (mostly) binary compatible. You can safely do rolling upgrades between binary compatible versions; i.e. across maintenance releases: e.g. from 1.4.4 to 1.4.6. See the 'Does compatibility between versions also mean binary compatibility?' discussion on the HBase dev mailing list.
[[hbase.rolling.upgrade]]
=== Rolling Upgrades
A rolling upgrade is the process by which you update the servers in your cluster one server at a time. You can rolling upgrade across HBase versions if they are binary or wire compatible. See <<hbase.rolling.restart>> for more on what this means. Coarsely, a rolling upgrade is a graceful stop of each server, an update of the software, and then a restart. You do this for each server in the cluster. Usually you upgrade the Master first and then the RegionServers. See <<rolling>> for tools that can help with the rolling upgrade process.
For example, in the below, HBase was symlinked to the actual HBase install. On upgrade, before running a rolling restart over the cluster, we changed the symlink to point at the new HBase software version and then ran
[source,bash]
----
$ HADOOP_HOME=~/hadoop-2.6.0-CRC-SNAPSHOT ~/hbase/bin/rolling-restart.sh --config ~/conf_hbase
----
The rolling-restart script will first gracefully stop and restart the master, and then each of the RegionServers in turn. Because the symlink was changed, on restart the server will come up using the new HBase version. Check logs for errors as the rolling upgrade proceeds.
[[hbase.rolling.restart]]
.Rolling Upgrade Between Versions that are Binary/Wire Compatible
Unless otherwise specified, HBase minor versions are binary compatible. You can do a <<hbase.rolling.upgrade>> between HBase point versions. For example, you can go from 1.4.4 to 1.4.6 by doing a rolling upgrade across the cluster, replacing the 1.4.4 binary with a 1.4.6 binary.
In the minor version-particular sections below, we call out where the versions are wire/protocol compatible and in this case, it is also possible to do a <<hbase.rolling.upgrade>>.
== Rollback
Sometimes things don't go as planned when attempting an upgrade. This section explains how to perform a _rollback_ to an earlier HBase release. Note that this should only be needed between Major and some Minor releases. You should always be able to _downgrade_ between HBase Patch releases within the same Minor version. These instructions may require you to take steps before you start the upgrade process, so be sure to read through this section beforehand.
=== Caveats
.Rollback vs Downgrade
This section describes how to perform a _rollback_ on an upgrade between HBase minor and major versions. In this document, rollback refers to the process of taking an upgraded cluster and restoring it to the old version _while losing all changes that have occurred since upgrade_. By contrast, a cluster _downgrade_ would restore an upgraded cluster to the old version while maintaining any data written since the upgrade. We currently only offer instructions to rollback HBase clusters. Further, rollback only works when these instructions are followed prior to performing the upgrade.
When these instructions talk about rollback vs downgrade of prerequisite cluster services (i.e. HDFS), you should treat leaving the service version the same as a degenerate case of downgrade.
.Replication
Unless you are doing an all-service rollback, the HBase cluster will lose any configured peers for HBase replication. If your cluster is configured for HBase replication, then prior to following these instructions you should document all replication peers. After performing the rollback you should then add each documented peer back to the cluster. For more information on enabling HBase replication, listing peers, and adding a peer see <<hbase.replication.management>>. Note also that data written to the cluster since the upgrade may or may not have already been replicated to any peers. Determining which, if any, peers have seen replication data as well as rolling back the data in those peers is out of the scope of this guide.
.Data Locality
Unless you are doing an all-service rollback, going through a rollback procedure will likely destroy all locality for Region Servers. You should expect degraded performance until after the cluster has had time to go through compactions to restore data locality. Optionally, you can force a compaction to speed this process up at the cost of generating cluster load.
.Configurable Locations
The instructions below assume default locations for the HBase data directory and the HBase znode. Both of these locations are configurable and you should verify the value used in your cluster before proceeding. In the event that you have a different value, just replace the default with the one found in your configuration.
* HBase data directory is configured via the key 'hbase.rootdir' and has a default value of '/hbase'.
* HBase znode is configured via the key 'zookeeper.znode.parent' and has a default value of '/hbase'.
=== All service rollback
If you will be performing a rollback of both the HDFS and ZooKeeper services, then HBase's data will be rolled back in the process.
.Requirements
* Ability to rollback HDFS and ZooKeeper
.Before upgrade
No additional steps are needed pre-upgrade. As an extra precautionary measure, you may wish to use distcp to back up the HBase data off of the cluster to be upgraded. To do so, follow the steps in the 'Before upgrade' section of 'Rollback after HDFS downgrade' but copy to another HDFS instance instead of within the same instance.
.Performing a rollback
. Stop HBase
. Perform a rollback for HDFS and ZooKeeper (HBase should remain stopped)
. Change the installed version of HBase to the previous version
. Start HBase
. Verify HBase contents: use the HBase shell to list tables and scan some known values.
=== Rollback after HDFS rollback and ZooKeeper downgrade
If you will be rolling back HDFS but going through a ZooKeeper downgrade, then HBase will be in an inconsistent state. You must ensure the cluster is not started until you complete this process.
.Requirements
* Ability to rollback HDFS
* Ability to downgrade ZooKeeper
.Before upgrade
No additional steps are needed pre-upgrade. As an extra precautionary measure, you may wish to use distcp to back up the HBase data off of the cluster to be upgraded. To do so, follow the steps in the 'Before upgrade' section of 'Rollback after HDFS downgrade' but copy to another HDFS instance instead of within the same instance.
.Performing a rollback
. Stop HBase
. Perform a rollback for HDFS and a downgrade for ZooKeeper (HBase should remain stopped)
. Change the installed version of HBase to the previous version
. Clean out ZooKeeper information related to HBase. WARNING: This step will permanently destroy all replication peers. Please see the section on HBase Replication under Caveats for more information.
+
.Clean HBase information out of ZooKeeper
[source,bash]
----
[hpnewton@gateway_node.example.com ~]$ zookeeper-client -server zookeeper1.example.com:2181,zookeeper2.example.com:2181,zookeeper3.example.com:2181
Welcome to ZooKeeper!
JLine support is disabled
rmr /hbase
quit
Quitting...
----
. Start HBase
. Verify HBase contents: use the HBase shell to list tables and scan some known values.
=== Rollback after HDFS downgrade
If you will be performing an HDFS downgrade, then you'll need to follow these instructions regardless of whether ZooKeeper goes through rollback, downgrade, or reinstallation.
.Requirements
* Ability to downgrade HDFS
* Pre-upgrade cluster must be able to run MapReduce jobs
* HDFS super user access
* Sufficient space in HDFS for at least two copies of the HBase data directory
.Before upgrade
Before beginning the upgrade process, you must take a complete backup of HBase's backing data. The following instructions cover backing up the data within the current HDFS instance. Alternatively, you can use the distcp command to copy the data to another HDFS cluster.
. Stop the HBase cluster
. Copy the HBase data directory to a backup location using the https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html[distcp command] as the HDFS super user (shown below on a security enabled cluster)
+
.Using distcp to backup the HBase data directory
[source,bash]
----
[hpnewton@gateway_node.example.com ~]$ kinit -k -t hdfs.keytab hdfs@EXAMPLE.COM
[hpnewton@gateway_node.example.com ~]$ hadoop distcp /hbase /hbase-pre-upgrade-backup
----
. Distcp will launch a mapreduce job to handle copying the files in a distributed fashion. Check the output of the distcp command to ensure this job completed successfully.
.Performing a rollback
. Stop HBase
. Perform a downgrade for HDFS and a downgrade/rollback for ZooKeeper (HBase should remain stopped)
. Change the installed version of HBase to the previous version
. Restore the HBase data directory from prior to the upgrade as the HDFS super user (shown below on a security enabled cluster). If you backed up your data on another HDFS cluster instead of locally, you will need to use the distcp command to copy it back to the current HDFS cluster.
+
.Restore the HBase data directory
[source,bash]
----
[hpnewton@gateway_node.example.com ~]$ kinit -k -t hdfs.keytab hdfs@EXAMPLE.COM
[hpnewton@gateway_node.example.com ~]$ hdfs dfs -mv /hbase /hbase-upgrade-rollback
[hpnewton@gateway_node.example.com ~]$ hdfs dfs -mv /hbase-pre-upgrade-backup /hbase
----
. Clean out ZooKeeper information related to HBase. WARNING: This step will permanently destroy all replication peers. Please see the section on HBase Replication under Caveats for more information.
+
.Clean HBase information out of ZooKeeper
[source,bash]
----
[hpnewton@gateway_node.example.com ~]$ zookeeper-client -server zookeeper1.example.com:2181,zookeeper2.example.com:2181,zookeeper3.example.com:2181
Welcome to ZooKeeper!
JLine support is disabled
rmr /hbase
quit
Quitting...
----
. Start HBase
. Verify HBase contents: use the HBase shell to list tables and scan some known values.
== Upgrade Paths
[[upgrade2.3]]
=== Upgrade from 2.0.x-2.2.x to 2.3+
There is no special consideration upgrading to hbase-2.3.x from earlier versions. From 2.2.x, it should be
rolling upgradeable. From 2.1.x or 2.0.x, you will need to clear the <<upgrade2.2>> hurdle first.
[[upgrade2.3_zookeeper]]
==== Upgraded ZooKeeper Dependency Version
Our dependency on Apache ZooKeeper has been upgraded to 3.5.7
(https://issues.apache.org/jira/browse/HBASE-24132[HBASE-24132]), as 3.4.x is EOL. The newer 3.5.x
client is compatible with the older 3.4.x server. However, if you're using HBase in stand-alone
mode and perform an in-place upgrade, there are some upgrade steps
https://cwiki.apache.org/confluence/display/ZOOKEEPER/Upgrade+FAQ[documented by the ZooKeeper community].
This doesn't impact a production deployment, but would impact a developer's local environment.
[[upgrade2.3_in-master-procedure-store-region]]
==== New In-Master Procedure Store
Of note, HBase 2.3.0 changes the in-Master Procedure Store implementation from a dedicated custom store
(see <<master.wal>>) to a standard HBase Region (https://issues.apache.org/jira/browse/HBASE-23326[HBASE-23326]).
The migration from the old to the new format is run automatically by the new 2.3.0 Master on startup. The old _MasterProcWALs_
dir which hosted the old custom implementation files in _${hbase.rootdir}_ is deleted on successful
migration. A new _MasterProc_ sub-directory replaces it to host the Store files and WALs for the new
Procedure Store in-Master Region. The in-Master Region is unusual in that it writes to an
alternate location at _${hbase.rootdir}/MasterProc_ rather than under _${hbase.rootdir}/data_ in the
filesystem and the special Procedure Store in-Master Region is hidden from all clients other than the active
Master itself. Otherwise, it is like any other with the Master process running flushes and compactions,
archiving WALs when over-flushed, and so on. Its files are readable by standard Region and Store file
tooling for triage and analysis as long as they are pointed to the appropriate location in the filesystem.
[[upgrade2.2]]
=== Upgrade from 2.0 or 2.1 to 2.2+
HBase 2.2+ uses a new Procedure form for assigning/unassigning/moving Regions. It does not process HBase 2.1 and 2.0's Unassign/Assign Procedure types. Upgrade requires that we first drain the Master Procedure Store of old-style Procedures before starting the new 2.2 Master. So you need to make sure that, before you kill the old version (2.0 or 2.1) Master, there are no regions in transition. Once the new version (2.2+) Master is up, you can rolling upgrade RegionServers one by one.
There is a safer way if you are running a 2.1.1+ or 2.0.3+ cluster. It takes four steps to upgrade the Master.
. Shut down both active and standby Masters (your cluster will continue to serve reads and writes without interruption).
. Set the property hbase.procedure.upgrade-to-2-2 to true in hbase-site.xml for the Master (see the snippet after this list), and start only one Master, still using the 2.1.1+ (or 2.0.3+) version.
. Wait until the Master quits. Confirm that there is a 'READY TO ROLLING UPGRADE' message in the Master log as the cause of the shutdown. The Procedure Store is now empty.
. Start new Masters with the new 2.2+ version.
Then you can rolling upgrade RegionServers one by one. See link:https://issues.apache.org/jira/browse/HBASE-21075[HBASE-21075] for more details.
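For step 2 above, the property goes in the Master's _hbase-site.xml_; a minimal sketch:

[source,xml]
----
<property>
  <name>hbase.procedure.upgrade-to-2-2</name>
  <value>true</value>
</property>
----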
[[upgrade2.0]]
=== Upgrading from 1.x to 2.x
In this section we will first call out significant changes compared to the prior stable HBase release and then go over the upgrade process. Be sure to read the former with care so you avoid surprises.
==== Changes of Note!
First we'll cover deployment / operational changes that you might hit when upgrading to HBase 2.0+. After that we'll call out changes for downstream applications. Please note that Coprocessors are covered in the operational section. Also note that this section is not meant to convey information about new features that may be of interest to you. For a complete summary of changes, please see the CHANGES.txt file in the source release artifact for the version you are planning to upgrade to.
[[upgrade2.0.basic.requirements]]
.Update to basic prerequisite minimums in HBase 2.0+
As noted in the section <<basic.prerequisites>>, HBase 2.0+ requires a minimum of Java 8 and Hadoop 2.6. The HBase community recommends ensuring you have already completed any needed upgrades in prerequisites prior to upgrading your HBase version.
[[upgrade2.0.hbck]]
.HBCK must match HBase server version
You *must not* use an HBase 1.x version of HBCK against an HBase 2.0+ cluster. HBCK is strongly tied to the HBase server version. Using the HBCK tool from an earlier release against an HBase 2.0+ cluster will destructively alter said cluster in unrecoverable ways.
As of HBase 2.0, HBCK (A.K.A _HBCK1_ or _hbck1_) is a read-only tool that can report the status of some non-public system internals but will often misread state because it does not understand the workings of hbase2.
To read about HBCK's replacement, see <<HBCK2>> in <<ops_mgt>>.
IMPORTANT: Related, before you upgrade, ensure that _hbck1_ reports no `INCONSISTENCIES`. Fixing hbase1-type inconsistencies post-upgrade is an involved process.
////
Link to a ref guide section on HBCK in 2.0 that explains use and calls out the inability of clients and server sides to detect version of each other.
////
[[upgrade2.0.removed.configs]]
.Configuration settings no longer in HBase 2.0+
The following configuration settings are no longer applicable or available. For details, please see the detailed release notes.
* hbase.config.read.zookeeper.config (see <<upgrade2.0.zkconfig>> for migration details)
* hbase.zookeeper.useMulti (HBase now always uses ZK's multi functionality)
* hbase.rpc.client.threads.max
* hbase.rpc.client.nativetransport
* hbase.fs.tmp.dir
// These next two seem worth a call out section?
* hbase.bucketcache.combinedcache.enabled
* hbase.bucketcache.ioengine no longer supports the 'heap' value.
* hbase.bulkload.staging.dir
* hbase.balancer.tablesOnMaster wasn't removed, strictly speaking, but its meaning has fundamentally changed and users should not set it. See the section <<upgrade2.0.regions.on.master>> for details.
* hbase.master.distributed.log.replay See the section <<upgrade2.0.distributed.log.replay>> for details
* hbase.regionserver.disallow.writes.when.recovering See the section <<upgrade2.0.distributed.log.replay>> for details
* hbase.regionserver.wal.logreplay.batch.size See the section <<upgrade2.0.distributed.log.replay>> for details
* hbase.master.catalog.timeout
* hbase.regionserver.catalog.timeout
* hbase.metrics.exposeOperationTimes
* hbase.metrics.showTableName
* hbase.online.schema.update.enable (HBase now always supports this)
* hbase.thrift.htablepool.size.max
[[upgrade2.0.renamed.configs]]
.Configuration properties that were renamed in HBase 2.0+
The following properties have been renamed. Attempts to set the old property will be ignored at run time.
.Renamed properties
[options="header"]
|============================================================================================================
|Old name |New name
|hbase.rpc.server.nativetransport |hbase.netty.nativetransport
|hbase.netty.rpc.server.worker.count |hbase.netty.worker.count
|hbase.hfile.compactions.discharger.interval |hbase.hfile.compaction.discharger.interval
|hbase.hregion.percolumnfamilyflush.size.lower.bound |hbase.hregion.percolumnfamilyflush.size.lower.bound.min
|============================================================================================================
[[upgrade2.0.changed.defaults]]
.Configuration settings with different defaults in HBase 2.0+
The following configuration settings changed their default value. Where applicable, the value to set to restore the behavior of HBase 1.2 is given.
* hbase.security.authorization now defaults to false. Set it to true to restore the same behavior as the previous default (see the snippet at the end of this list).
* hbase.client.retries.number is now set to 10. Previously it was 35. Downstream users are advised to use client timeouts as described in section <<config_timeouts>> instead.
* hbase.client.serverside.retries.multiplier is now set to 3. Previously it was 10. Downstream users are advised to use client timeouts as described in section <<config_timeouts>> instead.
* hbase.master.fileSplitTimeout is now set to 10 minutes. Previously it was 30 seconds.
* hbase.regionserver.logroll.multiplier is now set to 0.5. Previously it was 0.95. This change is tied with the following doubling of block size. Combined, these two configuration changes should make for WALs of about the same size as those in hbase-1.x but there should be less incidence of small blocks because we fail to roll the WAL before we hit the blocksize threshold. See link:https://issues.apache.org/jira/browse/HBASE-19148[HBASE-19148] for discussion.
* hbase.regionserver.hlog.blocksize defaults to 2x the HDFS default block size for the WAL dir. Previously it was equal to the HDFS default block size for the WAL dir.
* hbase.client.start.log.errors.counter changed to 5. Previously it was 9.
* hbase.ipc.server.callqueue.type changed to 'fifo'. In HBase versions 1.0 - 1.2 it was 'deadline'. In prior and later 1.x versions it already defaults to 'fifo'.
* hbase.hregion.memstore.chunkpool.maxsize is 1.0 by default. Previously it was 0.0. Effectively, this means previously we would not use a chunk pool when our memstore is onheap and now we will. See the section <<gcpause>> for more information about the MSLAB chunk pool.
* hbase.master.cleaner.interval is now set to 10 minutes. Previously it was 1 minute.
* hbase.master.procedure.threads will now default to 1/4 of the number of available CPUs, but not less than 16 threads. Previously it would be number of threads equal to number of CPUs.
* hbase.hstore.blockingStoreFiles is now 16. Previously it was 10.
* hbase.http.max.threads is now 16. Previously it was 10.
* hbase.client.max.perserver.tasks is now 2. Previously it was 5.
* hbase.normalizer.period is now 5 minutes. Previously it was 30 minutes.
* hbase.regionserver.region.split.policy is now SteppingSplitPolicy. Previously it was IncreasingToUpperBoundRegionSplitPolicy.
* replication.source.ratio is now 0.5. Previously it was 0.1.
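If you depend on one of the old defaults, restore it with an explicit override in _hbase-site.xml_. An illustrative sketch for the authorization setting called out at the top of this list:

[source,xml]
----
<property>
  <name>hbase.security.authorization</name>
  <value>true</value>
</property>
----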
[[upgrade2.0.regions.on.master]]
."Master hosting regions" feature broken and unsupported
The feature "Master acts as region server" and associated follow-on work available in HBase 1.y is non-functional in HBase 2.y and should not be used in a production setting due to deadlock on Master initialization. Downstream users are advised to treat related configuration settings as experimental and the feature as inappropriate for production settings.
A brief summary of related changes:
* Master no longer carries regions by default
* hbase.balancer.tablesOnMaster is a boolean, default false (if it holds an HBase 1.x list of tables, it will default to false)
* hbase.balancer.tablesOnMaster.systemTablesOnly is a boolean used to keep user tables off the master; default false
* those wishing to replicate old list-of-servers config should deploy a stand-alone RegionServer process and then rely on Region Server Groups
[[upgrade2.0.distributed.log.replay]]
."Distributed Log Replay" feature broken and removed
The Distributed Log Replay feature was broken and has been removed from HBase 2.y+. As a consequence all related configs, metrics, RPC fields, and logging have also been removed. Note that this feature was found to be unreliable in the run up to HBase 1.0, defaulted to being unused, and was effectively removed in HBase 1.2.0 when we started ignoring the config that turns it on (link:https://issues.apache.org/jira/browse/HBASE-14465[HBASE-14465]). If you are currently using the feature, be sure to perform a clean shutdown, ensure all DLR work is complete, and disable the feature prior to upgrading.
[[upgrade2.0.prefix-tree.removed]]
._prefix-tree_ encoding removed
The prefix-tree encoding was removed from HBase 2.0.0 (link:https://issues.apache.org/jira/browse/HBASE-19179[HBASE-19179]).
It was (late!) deprecated in hbase-1.2.7, hbase-1.4.0, and hbase-1.3.2.
This feature was removed because it was not being actively maintained. If interested in reviving this
sweet facility, which improved random read latencies at the expense of slower writes,
write the HBase developers list at _dev at hbase dot apache dot org_.
The prefix-tree encoding needs to be removed from all tables before upgrading to HBase 2.0+.
To do that, first change the encoding from PREFIX_TREE to something else that is supported in HBase 2.0,
then major compact the tables that previously used PREFIX_TREE encoding.
To check which column families are using incompatible data block encoding you can use <<ops.pre-upgrade,Pre-Upgrade Validator>>.
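As an illustration, the following is a sketch of that remediation using the HBase 1.x Java Admin API, to be run _before_ the upgrade. The table name `mytable` and family `cf` are hypothetical, and FAST_DIFF is just one example of a supported replacement encoding.
[source,java]
----
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.util.Bytes;

public class RemovePrefixTreeEncoding {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("mytable"); // hypothetical table
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      HColumnDescriptor cf =
          admin.getTableDescriptor(table).getFamily(Bytes.toBytes("cf")); // hypothetical family
      cf.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF); // any HBase 2.0-supported encoding
      admin.modifyColumn(table, cf); // async; wait for the schema change to finish
      admin.majorCompact(table);     // rewrite HFiles so no PREFIX_TREE blocks remain
    }
  }
}
----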
[[upgrade2.0.metrics]]
.Changed metrics
The following metrics have changed names:
* Metrics previously published under the name "AssignmentManger" [sic] are now published under the name "AssignmentManager"
The following metrics have changed their meaning:
* The metric 'blockCacheEvictionCount' published on a per-region server basis no longer includes blocks removed from the cache due to the invalidation of the hfiles they are from (e.g. via compaction).
* The metric 'totalRequestCount' increments once per request; previously it incremented by the number of `Actions` carried in the request; e.g. if a request was a `multi` made of four Gets and two Puts, we'd increment 'totalRequestCount' by six; now we increment by one regardless. Expect to see lower values for this metric in hbase-2.0.0.
* The 'readRequestCount' now counts only reads that return a non-empty row, where older versions of HBase would increment 'readRequestCount' whether or not a Result was returned. This change will flatten the profile of the read-requests graphs if many requests are for non-existent rows. A YCSB read-heavy workload can do this, dependent on how the database was loaded.
The following metrics have been removed:
* Metrics related to the Distributed Log Replay feature are no longer present. They were previously found in the region server context under the name 'replay'. See the section <<upgrade2.0.distributed.log.replay>> for details.
The following metrics have been added:
* 'totalRowActionRequestCount' is a count of region row actions summing reads and writes.
[[upgrade2.0.logging]]
.Changed logging
HBase-2.0.0 now uses link:https://www.slf4j.org/[slf4j] as its logging frontend.
Previously, we used link:http://logging.apache.org/log4j/1.2/[log4j (1.2)].
For most users the transition should be seamless; slf4j does a good job interpreting
_log4j.properties_ logging configuration files such that you should not notice
any difference in your log system emissions.
That said, your _log4j.properties_ may need freshening. See link:https://issues.apache.org/jira/browse/HBASE-20351[HBASE-20351]
for example, where a stale log configuration file manifested as netty configuration
being dumped at DEBUG level as a preamble on every shell command invocation.
[[upgrade2.0.zkconfig]]
.ZooKeeper configs no longer read from zoo.cfg
HBase no longer optionally reads the 'zoo.cfg' file for ZooKeeper related configuration settings. If you previously relied on the 'hbase.config.read.zookeeper.config' config for this functionality, you should migrate any needed settings to the hbase-site.xml file while adding the prefix 'hbase.zookeeper.property.' to each property name.
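For example, a `clientPort=2181` line in _zoo.cfg_ becomes the `hbase.zookeeper.property.clientPort` property in _hbase-site.xml_. The following is a small, hypothetical helper sketch (not a shipped tool) that prints the renamed keys for a given _zoo.cfg_ so you can copy them into _hbase-site.xml_.
[source,java]
----
import java.io.FileReader;
import java.util.Properties;

public class ZooCfgToHBaseSite {
  public static void main(String[] args) throws Exception {
    Properties zoo = new Properties();
    try (FileReader in = new FileReader(args[0])) { // path to your zoo.cfg
      zoo.load(in);
    }
    // Each zoo.cfg key maps to the same key under the hbase.zookeeper.property. prefix.
    zoo.stringPropertyNames().forEach(key ->
        System.out.println("hbase.zookeeper.property." + key + " = " + zoo.getProperty(key)));
  }
}
----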
[[upgrade2.0.permissions]]
.Changes in permissions
The following permission related changes either altered semantics or defaults:
* Permissions granted to a user now merge with existing permissions for that user, rather than over-writing them. (see link:https://issues.apache.org/jira/browse/HBASE-17472[the release note on HBASE-17472] for details)
* Region Server Group commands (added in 1.4.0) now require admin privileges.
[[upgrade2.0.admin.commands]]
.Most Admin APIs don't work against an HBase 2.0+ cluster from pre-HBase 2.0 clients
A number of admin commands are known to not work when used from a pre-HBase 2.0 client. This includes an HBase Shell that has the library jars from pre-HBase 2.0. You will need to plan for an outage of use of admin APIs and commands until you can also update to the needed client version.
The following client operations do not work against HBase 2.0+ cluster when executed from a pre-HBase 2.0 client:
* list_procedures
* split
* merge_region
* list_quotas
* enable_table_replication
* disable_table_replication
* Snapshot related commands
.Admin commands deprecated in 1.0 have been removed.
The following commands that were deprecated in 1.0 have been removed. Where applicable the replacement command is listed.
* The 'hlog' command has been removed. Downstream users should rely on the 'wal' command instead.
[[upgrade2.0.memory]]
.Region Server memory consumption changes.
Users upgrading from versions prior to HBase 1.4 should read the instructions in section <<upgrade1.4.memory>>.
Additionally, HBase 2.0 has changed how memstore memory is tracked for flushing decisions. Previously, both the data size and overhead for storage were used to calculate utilization against the flush threshold. Now, only data size is used to make these per-region decisions. Globally, data size plus storage overhead is used to make decisions about forced flushes.
[[upgrade2.0.ui.splitmerge.by.row]]
.Web UI for splitting and merging operates on row prefixes
Previously, the Web UI included functionality on table status pages to merge or split based on an encoded region name. In HBase 2.0, this functionality instead works by taking a row prefix.
[[upgrade2.0.replication]]
.Special upgrading for Replication users from pre-HBase 1.4
User running versions of HBase prior to the 1.4.0 release that make use of replication should be sure to read the instructions in the section <<upgrade1.4.replication>>.
[[upgrade2.0.shell]]
.HBase shell changes
The HBase shell command relies on a bundled JRuby instance. This bundled JRuby has been updated from version 1.6.8 to version 9.1.10.0. This represents a change from Ruby 1.8 to Ruby 2.3.3, which introduces non-compatible language changes for user scripts.
The HBase shell command now ignores the '--return-values' flag that was present in early HBase 1.4 releases. Instead the shell always behaves as though that flag were passed. If you wish to avoid having expression results printed in the console you should alter your IRB configuration as noted in the section <<irbrc>>.
[[upgrade2.0.coprocessors]]
.Coprocessor APIs have changed in HBase 2.0+
All Coprocessor APIs have been refactored to improve supportability around binary API compatibility for future versions of HBase. If you or applications you rely on have custom HBase coprocessors, you should read link:https://issues.apache.org/jira/browse/HBASE-18169[the release notes for HBASE-18169] for details of changes you will need to make prior to upgrading to HBase 2.0+.
For example, if you had a BaseRegionObserver in HBase 1.2, then at a minimum you will need to update it to implement both RegionObserver and RegionCoprocessor and add the method, shown below in a skeletal observer class (the class name is illustrative):
[source,java]
----
public class MyObserver implements RegionCoprocessor, RegionObserver { // name illustrative
  // ... your existing observer overrides ...
  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }
}
----
////
This would be a good place to link to a coprocessor migration guide
////
[[upgrade2.0.hfile3.only]]
.HBase 2.0+ can no longer write HFile v2 files.
HBase has simplified our internal HFile handling. As a result, we can no longer write HFile versions earlier than the default of version 3. Upgrading users should ensure that hfile.format.version is not set to 2 in hbase-site.xml before upgrading. Failing to do so will cause Region Server failure. HBase can still read HFiles written in the older version 2 format.
[[upgrade2.0.pb.wal.only]]
.HBase 2.0+ can no longer read Sequence File based WAL file.
HBase can no longer read the deprecated WAL files written in the Apache Hadoop Sequence File format. The hbase.regionserver.hlog.reader.impl and hbase.regionserver.hlog.writer.impl configuration entries should be set to use the Protobuf based WAL reader / writer classes. This implementation has been the default since HBase 0.96, so legacy WAL files should not be a concern for most downstream users.
A clean cluster shutdown should ensure there are no WAL files. If you are unsure of a given WAL file's format you can use the `hbase wal` command to parse files while the HBase cluster is offline. In HBase 2.0+, this command will not be able to read a Sequence File based WAL. For more information on the tool see the section <<hlog_tool.prettyprint>>.
[[upgrade2.0.filters]]
.Change in behavior for filters
The Filter ReturnCode NEXT_ROW has been redefined as skipping to the next row in the current family, not to the next row across all families. This is more reasonable because ReturnCode is a concept at the store level, not the region level.
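To make the new semantics concrete, here is a minimal, hypothetical custom filter sketch against the HBase 2.0 `Filter` API. When `filterCell` returns NEXT_ROW, scanning skips the remaining cells of that row within the current store (column family) only; cells of the same row in other families are still evaluated.
[source,java]
----
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.filter.FilterBase;
import org.apache.hadoop.hbase.util.Bytes;

public class SkipRowInFamilyFilter extends FilterBase { // hypothetical example filter
  private static final byte[] MARKER = Bytes.toBytes("skip"); // hypothetical qualifier

  @Override
  public ReturnCode filterCell(Cell c) {
    // In 2.0, NEXT_ROW moves on within the current family's store;
    // other families of the same row are still scanned.
    if (CellUtil.matchingQualifier(c, MARKER)) {
      return ReturnCode.NEXT_ROW;
    }
    return ReturnCode.INCLUDE;
  }
}
----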
[[upgrade2.0.shaded.client.preferred]]
.Downstream HBase 2.0+ users should use the shaded client
Downstream users are strongly urged to rely on the Maven coordinates org.apache.hbase:hbase-shaded-client for their runtime use. This artifact contains all the needed implementation details for talking to an HBase cluster while minimizing the number of third party dependencies exposed.
Note that this artifact exposes some classes in the org.apache.hadoop package space (e.g. o.a.h.configuration.Configuration) so that we can maintain source compatibility with our public API. Those classes are included so that they can be altered to use the same relocated third party dependencies as the rest of the HBase client code. In the event that you need to *also* use Hadoop in your code, you should ensure all Hadoop related jars precede the HBase client jar in your classpath.
[[upgrade2.0.mapreduce.module]]
.Downstream HBase 2.0+ users of MapReduce must switch to new artifact
Downstream users of HBase's integration for Apache Hadoop MapReduce must switch to relying on the org.apache.hbase:hbase-shaded-mapreduce module for their runtime use. Historically, downstream users relied on either the org.apache.hbase:hbase-server or org.apache.hbase:hbase-shaded-server artifacts for these classes. Both uses are no longer supported and in the vast majority of cases will fail at runtime.
Note that this artifact exposes some classes in the org.apache.hadoop package space (e.g. o.a.h.configuration.Configuration) so that we can maintain source compatibility with our public API. Those classes are included so that they can be altered to use the same relocated third party dependencies as the rest of the HBase client code. In the event that you need to *also* use Hadoop in your code, you should ensure all Hadoop related jars precede the HBase client jar in your classpath.
[[upgrade2.0.dependencies]]
.Significant changes to runtime classpath
A number of internal dependencies for HBase were updated or removed from the runtime classpath. Downstream client users who do not follow the guidance in <<upgrade2.0.shaded.client.preferred>> will have to examine the set of dependencies Maven pulls in for impact. Downstream users of LimitedPrivate Coprocessor APIs will need to examine the runtime environment for impact. For details on our new handling of third party libraries that have historically been a problem with respect to harmonizing compatible runtime versions, see the reference guide section <<thirdparty>>.
[[upgrade2.0.public.api]]
.Multiple breaking changes to source and binary compatibility for client API
The Java client API for HBase has a number of changes that break both source and binary compatibility; for details, see the Compatibility Check Report for the release you'll be upgrading to.
[[upgrade2.0.tracing]]
.Tracing implementation changes
The backing implementation of HBase's tracing features was updated from Apache HTrace 3 to HTrace 4, which includes several breaking changes. While HTrace 3 and 4 can coexist in the same runtime, they will not integrate with each other, leading to disjoint trace information.
The internal changes to HBase during this upgrade were sufficient for compilation, but it has not been confirmed that there are no regressions in tracing functionality. Please consider this feature experimental for the immediate future.
If you previously relied on client side tracing integrated with HBase operations, it is recommended that you upgrade your usage to HTrace 4 as well.
After the Apache HTrace project was retired and moved to the Attic, tracing in HBase has been left broken and unmaintained since HBase 2.0. A new project, link:https://issues.apache.org/jira/browse/HBASE-22120[HBASE-22120], will replace HTrace with OpenTelemetry. It will ship in the 3.0.0 release. Please see the reference guide section <<tracing>> for more details.
[[upgrade2.0.hfile.compatability]]
.HFiles lose forward compatibility
HFiles generated by 2.0.0, 2.0.1, and 2.1.0 are not forward compatible with 1.4.6-, 1.3.2.1-, 1.2.6.1-,
and other inactive releases. HFiles lose compatibility because HBase in the newer versions
(2.0.0, 2.0.1, 2.1.0) uses protobuf to serialize/deserialize TimeRangeTracker (TRT) while older
versions use DataInput/DataOutput. To solve this, we had to apply
link:https://jira.apache.org/jira/browse/HBASE-21012[HBASE-21012]
to 2.x and link:https://jira.apache.org/jira/browse/HBASE-21013[HBASE-21013] to 1.x.
For more information, please check
link:https://jira.apache.org/jira/browse/HBASE-21008[HBASE-21008].
[[upgrade2.0.perf]]
.Performance
You will likely see a change in the performance profile on upgrade to hbase-2.0.0 given
read and write paths have undergone significant change. On release, writes may be
slower with reads about the same or much better, dependent on context. Be prepared
to spend time re-tuning (See <<performance>>).
Performance is also an area that is now under active review so look forward to
improvement in coming releases (See
link:https://issues.apache.org/jira/browse/HBASE-20188[HBASE-20188 TESTING Performance]).
[[upgrade2.0.it.kerberos]]
.Integration Tests and Kerberos
Integration Tests (`IntegrationTests*`) used to rely on the Kerberos credential cache
for authentication against secured clusters. This used to lead to tests failing due
to authentication failures when the tickets in the credential cache expired.
As of hbase-2.0.0 (and hbase-1.3.0+), the integration test clients will make use
of the configuration properties `hbase.client.keytab.file` and
`hbase.client.kerberos.principal`. They are required. The clients will perform a
login from the configured keytab file and automatically refresh the credentials
in the background for the process lifetime (See
link:https://issues.apache.org/jira/browse/HBASE-16231[HBASE-16231]).
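Conceptually, the login those clients perform resembles the following sketch using Hadoop's `UserGroupInformation`; this illustrates keytab-based login in general and is not the exact code path of the integration tests.
[source,java]
----
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLoginSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // expects Kerberos enabled in the config
    UserGroupInformation.setConfiguration(conf);
    // Log in from the configured keytab; UGI can then relogin as tickets expire.
    UserGroupInformation.loginUserFromKeytab(
        conf.get("hbase.client.kerberos.principal"),
        conf.get("hbase.client.keytab.file"));
  }
}
----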
[[upgrade2.0.compaction.throughput.limit]]
.Default Compaction Throughput
HBase 2.x comes with default limits to the speed at which compactions can execute. This
limit is defined per RegionServer. In versions of HBase earlier than 1.5, there
was no limit by default to the speed at which a compaction could run. Applying a limit
to the throughput of a compaction should ensure more stable operations from RegionServers.
Take care to notice that this limit is _per RegionServer_, not _per compaction_.
The throughput limit is defined as a range of bytes written per second, and is
allowed to vary within the given lower and upper bound. RegionServers observe the
current throughput of a compaction and apply a linear formula to adjust the allowed
throughput, within the lower and upper bound, with respect to external pressure.
For compactions, external pressure is defined as the number of store files with
respect to the maximum number of allowed store files. The more store files, the
higher the compaction pressure.
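The adjustment amounts to a linear interpolation between the two bounds. The following sketch shows the arithmetic only (it is not the actual controller implementation), assuming a pressure value normalized to the range [0, 1]:
[source,java]
----
public class CompactionThroughputSketch {
  // pressure: 0.0 = few store files, 1.0 = at the blocking store file limit.
  static long allowedThroughput(long lowerBound, long upperBound, double pressure) {
    return (long) (lowerBound + (upperBound - lowerBound) * pressure);
  }

  public static void main(String[] args) {
    long lower = 52428800L;  // default lower bound, 50 MB/s
    long upper = 104857600L; // default upper bound, 100 MB/s
    // At 25% pressure the allowed throughput is 62.5 MB/s (65536000 bytes/s).
    System.out.println(allowedThroughput(lower, upper, 0.25));
  }
}
----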
Configuration of this throughput is governed by the following properties.
- The lower bound is defined by `hbase.hstore.compaction.throughput.lower.bound`
and defaults to 50 MB/s (`52428800`).
- The upper bound is defined by `hbase.hstore.compaction.throughput.higher.bound`
and defaults to 100 MB/s (`104857600`).
To revert this behavior to the unlimited compaction throughput of earlier versions
of HBase, please set the following property to the implementation that applies no
limits to compactions.
`hbase.regionserver.throughput.controller=org.apache.hadoop.hbase.regionserver.throttle.NoLimitThroughputController`
////
This would be a good place to link to an appendix on migrating applications
////
[[upgrade2.0.coprocessors]]
==== Upgrading Coprocessors to 2.0
Coprocessors have changed substantially in 2.0 ranging from top level design changes in class
hierarchies to changed/removed methods, interfaces, etc.
(Parent jira: link:https://issues.apache.org/jira/browse/HBASE-18169[HBASE-18169 Coprocessor fix
and cleanup before 2.0.0 release]). Some of the reasons for such widespread changes:
. Pass Interfaces instead of Implementations; e.g. TableDescriptor instead of HTableDescriptor and
Region instead of HRegion (link:https://issues.apache.org/jira/browse/HBASE-18241[HBASE-18241]
Change client.Table and client.Admin to not use HTableDescriptor).
. Design refactor so implementers need to fill out less boilerplate and so we can do more
compile-time checking (link:https://issues.apache.org/jira/browse/HBASE-17732[HBASE-17732])
. Purge Protocol Buffers from Coprocessor API
(link:https://issues.apache.org/jira/browse/HBASE-18859[HBASE-18859],
link:https://issues.apache.org/jira/browse/HBASE-16769[HBASE-16769], etc)
. Cut back on what we expose to Coprocessors removing hooks on internals that were too private to
expose (e.g. link:https://issues.apache.org/jira/browse/HBASE-18453[HBASE-18453]
CompactionRequest should not be exposed to user directly;
link:https://issues.apache.org/jira/browse/HBASE-18298[HBASE-18298] RegionServerServices Interface
cleanup for CP expose; etc)
To use coprocessors in 2.0, they should be rebuilt against the new API, otherwise they will fail to
load and HBase processes will die.
Suggested order of changes to upgrade the coprocessors:
. Directly implement observer interfaces instead of extending Base*Observer classes. Change
`Foo extends BaseXXXObserver` to `Foo implements XXXObserver`.
(link:https://issues.apache.org/jira/browse/HBASE-17312[HBASE-17312]).
. Adapt to design change from Inheritance to Composition
(link:https://issues.apache.org/jira/browse/HBASE-17732[HBASE-17732]) by following
link:https://github.com/apache/hbase/blob/master/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc#migrating-existing-cps-to-new-design[this
example].
. getTable() has been removed from the CoprocessorEnvironment; coprocessors should self-manage
Table instances (see the sketch after this list).
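A minimal, hypothetical sketch of that last step is shown below. It assumes the 2.0 `RegionCoprocessorEnvironment#getConnection()` accessor for the server-managed connection; the table name is illustrative.
[source,java]
----
import java.io.IOException;
import java.util.Optional;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;

public class SelfManagedTableObserver implements RegionCoprocessor, RegionObserver {
  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  void readSideTable(RegionCoprocessorEnvironment env) throws IOException {
    // getTable() is gone from CoprocessorEnvironment: obtain (and close)
    // Table instances through the environment's connection instead.
    try (Table side = env.getConnection().getTable(TableName.valueOf("side_table"))) {
      // ... use the table; "side_table" is a hypothetical name
    }
  }
}
----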
Some examples of writing coprocessors with the new API can be found in the hbase-examples module
link:https://github.com/apache/hbase/tree/branch-2.0/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example[here].
Lastly, if an API has been changed/removed in a way that breaks you irreparably, and if there's a
good justification to add it back, bring it to our notice (dev@hbase.apache.org).
[[upgrade2.0.rolling.upgrades]]
==== Rolling Upgrade from 1.x to 2.x
Rolling upgrades are currently an experimental feature.
They have had limited testing. There are likely corner
cases as yet uncovered in our
limited experience so you should be careful if you go this
route. The stop/upgrade/start described in the next section,
<<upgrade2.0.process>>, is the safest route.
That said, the below is a prescription for a
rolling upgrade of a 1.4 cluster.
.Pre-Requirements
* Upgrade to the latest 1.4.x release. Pre 1.4 releases may also work but are not tested, so please upgrade to 1.4.3+ before upgrading to 2.x, unless you are an expert and familiar with the region assignment and crash processing. See the section <<upgrade1.4>> on how to upgrade to 1.4.x.
* Make sure that the zk-less assignment is enabled, i.e., set `hbase.assignment.usezk` to `false`. This is the most important thing. It allows the 1.x master to assign/unassign regions to/from 2.x region servers. See the release note section of link:https://issues.apache.org/jira/browse/HBASE-11059[HBASE-11059] on how to migrate from zk based assignment to zk less assignment.
* Before you upgrade, ensure that _hbck1_ reports no `INCONSISTENCIES`. Fixing hbase1-type inconsistencies post-upgrade is an involved process.
* We have tested rolling upgrading from 1.4.3 to 2.1.0, but it should also work if you want to upgrade to 2.0.x.
.Instructions
. Unload a region server and upgrade it to 2.1.0. With link:https://issues.apache.org/jira/browse/HBASE-17931[HBASE-17931] in place, the meta region and regions for other system tables will be moved to this region server immediately. If not, please move them manually to the new region server. This is very important because
** The schema of the meta region is hard coded. If meta is on an old region server, the new region servers cannot access it, as it lacks some families (for example, table state).
** A lower-version client can communicate with a higher-version server, but not vice versa. If the meta region is on an old region server, a new region server will end up using a higher-version client to talk to a lower-version server, which may introduce strange problems.
. Rolling upgrade all other region servers.
. Upgrade masters.
It is OK if region servers crash during the rolling upgrade. The 1.x master can assign regions to both 1.x and 2.x region servers, and link:https://issues.apache.org/jira/browse/HBASE-19166[HBASE-19166] fixed a problem so that a 1.x region server can also read and split the WALs written by a 2.x region server.
NOTE: please read the <<Changes of Note!,Changes of Note!>> section carefully before performing a rolling upgrade. Make sure you do not use features removed in 2.0, for example the prefix-tree encoding or the old hfile format. These could fail the upgrade and leave the cluster in an intermediate state that is hard to recover from.
NOTE: If you have success running this prescription, please notify the dev list with a note on your experience and/or update the above with any deviations you may have taken so others going this route can benefit from your efforts.
[[upgrade2.0.process]]
==== Upgrade process from 1.x to 2.x
To upgrade an existing HBase 1.x cluster, you should:
* Ensure that _hbck1_ reports no `INCONSISTENCIES`. Fixing hbase1-type inconsistencies post-upgrade is an involved process. Fix all _hbck1_ complaints before proceeding.
* Clean shutdown of existing 1.x cluster
* Update coprocessors
* Upgrade Master roles first
* Upgrade RegionServers
* (Eventually) Upgrade Clients
[[upgrade1.4]]
=== Upgrading from pre-1.4 to 1.4+
[[upgrade1.4.memory]]
==== Region Server memory consumption changes.
Users upgrading from versions prior to HBase 1.4 should be aware that the estimates of heap usage by the memstore objects (KeyValue, object and array header sizes, etc) have been made more accurate for heap sizes up to 32G (using CompressedOops), resulting in them dropping by 10-50% in practice. This also results in fewer flushes and compactions due to "fatter" flushes. YMMV. As a result, the actual heap usage of the memstore before being flushed may increase by up to 100%. If configured memory limits for the region server had been tuned based on observed usage, this change could result in worse GC behavior or even OutOfMemory errors. Set the environment property (not hbase-site.xml) "hbase.memorylayout.use.unsafe" to false to disable.
[[upgrade1.4.replication]]
==== Replication peer's TableCFs config
Before 1.4, a table name could not include a namespace in a replication peer's TableCFs config. This was fixed by adding TableCFs to the ReplicationPeerConfig stored on ZooKeeper. So when upgrading to 1.4, you must first update the original ReplicationPeerConfig data on ZooKeeper. There are four steps to upgrade when your cluster has a replication peer with a TableCFs config.
* Disable the replication peer.
* If the master has permission to write the replication peer znode, then rolling-update the master directly. If not, use the TableCFsUpdater tool to update the replication peer's config.
[source,bash]
----
$ bin/hbase org.apache.hadoop.hbase.replication.master.TableCFsUpdater update
----
* Rolling update regionservers.
* Enable the replication peer.
Notes:
* Don't use an old client (before 1.4) to change the replication peer's config. Because the client writes the config to ZooKeeper directly, the old client will miss the TableCFs config. Moreover, the old client writes the TableCFs config to the old tablecfs znode, which will not work for new-version region servers.
[[upgrade1.4.rawscan]]
==== Raw scan now ignores TTL
Doing a raw scan will now return results that have expired according to TTL settings.
[[upgrade1.3]]
=== Upgrading from pre-1.3 to 1.3+
If running Integration Tests under Kerberos, see <<upgrade2.0.it.kerberos>>.
[[upgrade1.0]]
=== Upgrading to 1.x
Please consult the documentation published specifically for the version of HBase that you are upgrading to for details on the upgrade process.
@ -1,43 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[appendix]
[[ycsb]]
== YCSB
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
link:https://github.com/brianfrankcooper/YCSB/[YCSB: The
Yahoo! Cloud Serving Benchmark] and HBase
TODO: Describe how YCSB is poor for putting up a decent cluster load.
TODO: Describe setup of YCSB for HBase.
In particular, presplit your tables before you start a run.
See link:https://issues.apache.org/jira/browse/HBASE-4163[HBASE-4163 Create Split Strategy for YCSB Benchmark] for why and a little shell command for how to do it.
Ted Dunning redid YCSB so it's mavenized and added a facility for verifying workloads.
See link:https://github.com/tdunning/YCSB[Ted Dunning's YCSB].
:numbered:
@ -1,450 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
[[zookeeper]]
= ZooKeeper(((ZooKeeper)))
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
A distributed Apache HBase installation depends on a running ZooKeeper cluster.
All participating nodes and clients need to be able to access the running ZooKeeper ensemble.
Apache HBase by default manages a ZooKeeper "cluster" for you.
It will start and stop the ZooKeeper ensemble as part of the HBase start/stop process.
You can also manage the ZooKeeper ensemble independent of HBase and just point HBase at the cluster it should use.
To toggle HBase management of ZooKeeper, use the `HBASE_MANAGES_ZK` variable in _conf/hbase-env.sh_.
This variable, which defaults to `true`, tells HBase whether to start/stop the ZooKeeper ensemble servers as part of HBase start/stop.
When HBase manages the ZooKeeper ensemble, you can specify ZooKeeper configuration directly in _conf/hbase-site.xml_.
A ZooKeeper configuration option can be set as a property in the HBase _hbase-site.xml_ XML configuration file by prefacing the ZooKeeper option name with `hbase.zookeeper.property`.
For example, the `clientPort` setting in ZooKeeper can be changed by setting the `hbase.zookeeper.property.clientPort` property.
For all default values used by HBase, including ZooKeeper configuration, see <<hbase_default_configurations,hbase default configurations>>.
Look for the `hbase.zookeeper.property` prefix.
For the full list of ZooKeeper configurations, see ZooKeeper's _zoo.cfg_.
HBase does not ship with a _zoo.cfg_ so you will need to browse the _conf_ directory in an appropriate ZooKeeper download.
You must at least list the ensemble servers in _hbase-site.xml_ using the `hbase.zookeeper.quorum` property.
This property defaults to a single ensemble member at `localhost` which is not suitable for a fully distributed HBase.
(It binds to the local machine only and remote clients will not be able to connect).
.How many ZooKeepers should I run?
[NOTE]
====
You can run a ZooKeeper ensemble that comprises 1 node only but in production it is recommended that you run a ZooKeeper ensemble of 3, 5 or 7 machines; the more members an ensemble has, the more tolerant the ensemble is of host failures.
Also, run an odd number of machines.
In ZooKeeper, an even number of peers is supported, but it is normally not used because an even sized ensemble requires, proportionally, more peers to form a quorum than an odd sized ensemble requires.
For example, an ensemble with 4 peers requires 3 to form a quorum, while an ensemble with 5 also requires 3 to form a quorum.
Thus, an ensemble of 5 allows 2 peers to fail, and thus is more fault tolerant than the ensemble of 4, which allows only 1 down peer.
Give each ZooKeeper server around 1GB of RAM, and if possible, its own dedicated disk (A dedicated disk is the best thing you can do to ensure a performant ZooKeeper ensemble). For very heavily loaded clusters, run ZooKeeper servers on separate machines from RegionServers (DataNodes and TaskTrackers).
====
For example, to have HBase manage a ZooKeeper quorum on nodes _rs{1,2,3,4,5}.example.com_, bound to port 2222 (the default is 2181), ensure `HBASE_MANAGES_ZK` is commented out or set to `true` in _conf/hbase-env.sh_ and then edit _conf/hbase-site.xml_ and set `hbase.zookeeper.property.clientPort` and `hbase.zookeeper.quorum`.
You should also set `hbase.zookeeper.property.dataDir` to other than the default as the default has ZooKeeper persist data under _/tmp_ which is often cleared on system restart.
In the example below we have ZooKeeper persist to _/usr/local/zookeeper_.
[source,xml]
----
<configuration>
...
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2222</value>
<description>Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>rs1.example.com,rs2.example.com,rs3.example.com,rs4.example.com,rs5.example.com</value>
<description>Comma separated list of servers in the ZooKeeper Quorum.
For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
By default this is set to localhost for local and pseudo-distributed modes
of operation. For a fully-distributed setup, this should be set to a full
list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
this is the list of servers which we will start/stop ZooKeeper on.
</description>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/zookeeper</value>
<description>Property from ZooKeeper's config zoo.cfg.
The directory where the snapshot is stored.
</description>
</property>
...
</configuration>
----
.What version of ZooKeeper should I use?
[CAUTION]
====
The newer the version, the better. ZooKeeper 3.4.x is required as of HBase 1.0.0.
====
.ZooKeeper Maintenance
[CAUTION]
====
Be sure to set up the data dir cleaner described under link:https://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_maintenance[ZooKeeper
Maintenance] else you could have 'interesting' problems a couple of months in; i.e.
ZooKeeper could start dropping sessions if it has to run through a directory of hundreds of thousands of logs, which it is wont to do around leader reelection time -- a rare process, but one run on occasion, whether because a machine is dropped or happens to hiccup.
====
== Using existing ZooKeeper ensemble
To point HBase at an existing ZooKeeper cluster, one that is not managed by HBase, set `HBASE_MANAGES_ZK` in _conf/hbase-env.sh_ to false:
----
...
# Tell HBase whether it should manage its own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=false
----
Next set ensemble locations and client port, if non-standard, in _hbase-site.xml_.
When HBase manages ZooKeeper, it will start/stop the ZooKeeper servers as a part of the regular start/stop scripts.
If you would like to run ZooKeeper yourself, independent of HBase start/stop, you would do the following:
----
${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
----
Note that you can use HBase in this manner to spin up a ZooKeeper cluster, unrelated to HBase.
Just make sure to set `HBASE_MANAGES_ZK` to `false` if you want it to stay up across HBase restarts so that when HBase shuts down, it doesn't take ZooKeeper down with it.
For more information about running a distinct ZooKeeper cluster, see the ZooKeeper link:https://zookeeper.apache.org/doc/current/zookeeperStarted.html[Getting
Started Guide].
Additionally, see the link:https://cwiki.apache.org/confluence/display/HADOOP2/ZooKeeper+FAQ#ZooKeeperFAQ-7[ZooKeeper Wiki] or the link:https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup[ZooKeeper
documentation] for more information on ZooKeeper sizing.
[[zk.sasl.auth]]
== SASL Authentication with ZooKeeper
Newer releases of Apache HBase (>= 0.92) will support connecting to a ZooKeeper Quorum that supports SASL authentication (which is available in ZooKeeper versions 3.4.0 or later).
This describes how to set up HBase to mutually authenticate with a ZooKeeper Quorum.
ZooKeeper/HBase mutual authentication (link:https://issues.apache.org/jira/browse/HBASE-2418[HBASE-2418]) is required as part of a complete secure HBase configuration (link:https://issues.apache.org/jira/browse/HBASE-3025[HBASE-3025]). For simplicity of explication, this section ignores additional configuration required (Secure HDFS and Coprocessor configuration). It's recommended to begin with an HBase-managed ZooKeeper configuration (as opposed to a standalone ZooKeeper quorum) for ease of learning.
=== Operating System Prerequisites
You need to have a working Kerberos KDC setup.
For each `$HOST` that will run a ZooKeeper server, you should have a principal `zookeeper/$HOST`.
For each such host, add a service key (using the `kadmin` or `kadmin.local` tool's `ktadd` command) for `zookeeper/$HOST` and copy this file to `$HOST`, and make it readable only to the user that will run zookeeper on `$HOST`.
Note the location of this file, which we will use below as _$PATH_TO_ZOOKEEPER_KEYTAB_.
Similarly, for each `$HOST` that will run an HBase server (master or regionserver), you should have a principal: `hbase/$HOST`.
For each host, add a keytab file called _hbase.keytab_ containing a service key for `hbase/$HOST`, copy this file to `$HOST`, and make it readable only to the user that will run an HBase service on `$HOST`.
Note the location of this file, which we will use below as _$PATH_TO_HBASE_KEYTAB_.
Each user who will be an HBase client should also be given a Kerberos principal.
This principal should usually have a password assigned to it (as opposed to, as with the HBase servers, a keytab file) which only this user knows.
The client's principal's `maxrenewlife` should be set so that it can be renewed enough so that the user can complete their HBase client processes.
For example, if a user runs a long-running HBase client process that takes at most 3 days, we might create this user's principal within `kadmin` with: `addprinc -maxrenewlife 3days`.
The ZooKeeper client and server libraries manage their own ticket refreshment by running threads that wake up periodically to do the refreshment.
On each host that will run an HBase client (e.g. `hbase shell`), add the following file to the HBase home directory's _conf_ directory:
[source,java]
----
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=false
useTicketCache=true;
};
----
We'll refer to this JAAS configuration file as _$CLIENT_CONF_ below.
=== HBase-managed ZooKeeper Configuration
On each node that will run a zookeeper, a master, or a regionserver, create a link:http://docs.oracle.com/javase/7/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html[JAAS] configuration file in the conf directory of the node's _HBASE_HOME_ directory that looks like the following:
[source,java]
----
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="$PATH_TO_ZOOKEEPER_KEYTAB"
storeKey=true
useTicketCache=false
principal="zookeeper/$HOST";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
useTicketCache=false
keyTab="$PATH_TO_HBASE_KEYTAB"
principal="hbase/$HOST";
};
----
where the _$PATH_TO_HBASE_KEYTAB_ and _$PATH_TO_ZOOKEEPER_KEYTAB_ files are what you created above, and `$HOST` is the hostname for that node.
The `Server` section will be used by the ZooKeeper quorum server, while the `Client` section will be used by the HBase master and regionservers.
The path to this file should be substituted for the text _$HBASE_SERVER_CONF_ in the _hbase-env.sh_ listing below.
The path to this file should be substituted for the text _$CLIENT_CONF_ in the _hbase-env.sh_ listing below.
Modify your _hbase-env.sh_ to include the following:
[source,bourne]
----
export HBASE_OPTS="-Djava.security.auth.login.config=$CLIENT_CONF"
export HBASE_MANAGES_ZK=true
export HBASE_ZOOKEEPER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
export HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
----
where _$HBASE_SERVER_CONF_ and _$CLIENT_CONF_ are the full paths to the JAAS configuration files created above.
Modify your _hbase-site.xml_ on each node that will run zookeeper, master or regionserver to contain:
[source,xml]
----
<configuration>
<property>
<name>hbase.zookeeper.quorum</name>
<value>$ZK_NODES</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.authProvider.1</name>
<value>org.apache.zookeeper.server.auth.SASLAuthenticationProvider</value>
</property>
<property>
<name>hbase.zookeeper.property.kerberos.removeHostFromPrincipal</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.kerberos.removeRealmFromPrincipal</name>
<value>true</value>
</property>
</configuration>
----
where `$ZK_NODES` is the comma-separated list of hostnames of the ZooKeeper Quorum hosts.
Start your HBase cluster by running one or more of the following set of commands on the appropriate hosts:
----
bin/hbase zookeeper start
bin/hbase master start
bin/hbase regionserver start
----
=== External ZooKeeper Configuration
Add a JAAS configuration file that looks like:
[source,java]
----
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
useTicketCache=false
keyTab="$PATH_TO_HBASE_KEYTAB"
principal="hbase/$HOST";
};
----
where the _$PATH_TO_HBASE_KEYTAB_ is the keytab created above for HBase services to run on this host, and `$HOST` is the hostname for that node.
Put this in the HBase home's configuration directory.
We'll refer to this file's full pathname as _$HBASE_SERVER_CONF_ below.
Modify your _hbase-env.sh_ to include the following:
[source,bourne]
----
export HBASE_OPTS="-Djava.security.auth.login.config=$CLIENT_CONF"
export HBASE_MANAGES_ZK=false
export HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
----
Modify your _hbase-site.xml_ on each node that will run a master or regionserver to contain:
[source,xml]
----
<configuration>
<property>
<name>hbase.zookeeper.quorum</name>
<value>$ZK_NODES</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.authProvider.1</name>
<value>org.apache.zookeeper.server.auth.SASLAuthenticationProvider</value>
</property>
<property>
<name>hbase.zookeeper.property.kerberos.removeHostFromPrincipal</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.kerberos.removeRealmFromPrincipal</name>
<value>true</value>
</property>
</configuration>
----
where `$ZK_NODES` is the comma-separated list of hostnames of the ZooKeeper Quorum hosts.
Also on each of these hosts, create a JAAS configuration file containing:
[source,java]
----
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="$PATH_TO_ZOOKEEPER_KEYTAB"
storeKey=true
useTicketCache=false
principal="zookeeper/$HOST";
};
----
where `$HOST` is the hostname of each Quorum host.
We will refer to the full pathname of this file as _$ZK_SERVER_CONF_ below.
Start your ZooKeepers on each ZooKeeper Quorum host with:
[source,bourne]
----
SERVER_JVMFLAGS="-Djava.security.auth.login.config=$ZK_SERVER_CONF" bin/zkServer start
----
Start your HBase cluster by running one or more of the following set of commands on the appropriate nodes:
----
bin/hbase master start
bin/hbase regionserver start
----
=== ZooKeeper Server Authentication Log Output
If the configuration above is successful, you should see something similar to the following in your ZooKeeper server logs:
----
11/12/05 22:43:39 INFO zookeeper.Login: successfully logged in.
11/12/05 22:43:39 INFO server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
11/12/05 22:43:39 INFO zookeeper.Login: TGT refresh thread started.
11/12/05 22:43:39 INFO zookeeper.Login: TGT valid starting at: Mon Dec 05 22:43:39 UTC 2011
11/12/05 22:43:39 INFO zookeeper.Login: TGT expires: Tue Dec 06 22:43:39 UTC 2011
11/12/05 22:43:39 INFO zookeeper.Login: TGT refresh sleeping until: Tue Dec 06 18:36:42 UTC 2011
..
11/12/05 22:43:59 INFO auth.SaslServerCallbackHandler:
Successfully authenticated client: authenticationID=hbase/ip-10-166-175-249.us-west-1.compute.internal@HADOOP.LOCALDOMAIN;
authorizationID=hbase/ip-10-166-175-249.us-west-1.compute.internal@HADOOP.LOCALDOMAIN.
11/12/05 22:43:59 INFO auth.SaslServerCallbackHandler: Setting authorizedID: hbase
11/12/05 22:43:59 INFO server.ZooKeeperServer: adding SASL authorization for authorizationID: hbase
----
=== ZooKeeper Client Authentication Log Output
On the ZooKeeper client side (HBase master or regionserver), you should see something similar to the following:
----
11/12/05 22:43:59 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=ip-10-166-175-249.us-west-1.compute.internal:2181 sessionTimeout=180000 watcher=master:60000
11/12/05 22:43:59 INFO zookeeper.ClientCnxn: Opening socket connection to server /10.166.175.249:2181
11/12/05 22:43:59 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 14851@ip-10-166-175-249
11/12/05 22:43:59 INFO zookeeper.Login: successfully logged in.
11/12/05 22:43:59 INFO client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
11/12/05 22:43:59 INFO zookeeper.Login: TGT refresh thread started.
11/12/05 22:43:59 INFO zookeeper.ClientCnxn: Socket connection established to ip-10-166-175-249.us-west-1.compute.internal/10.166.175.249:2181, initiating session
11/12/05 22:43:59 INFO zookeeper.Login: TGT valid starting at: Mon Dec 05 22:43:59 UTC 2011
11/12/05 22:43:59 INFO zookeeper.Login: TGT expires: Tue Dec 06 22:43:59 UTC 2011
11/12/05 22:43:59 INFO zookeeper.Login: TGT refresh sleeping until: Tue Dec 06 18:30:37 UTC 2011
11/12/05 22:43:59 INFO zookeeper.ClientCnxn: Session establishment complete on server ip-10-166-175-249.us-west-1.compute.internal/10.166.175.249:2181, sessionid = 0x134106594320000, negotiated timeout = 180000
----
=== Configuration from Scratch
This has been tested on the current standard Amazon Linux AMI.
First setup KDC and principals as described above.
Next checkout code and run a sanity check.
----
git clone https://gitbox.apache.org/repos/asf/hbase.git
cd hbase
mvn clean test -Dtest=TestZooKeeperACL
----
Then configure HBase as described above.
Manually edit target/cached_classpath.txt (see below):
----
bin/hbase zookeeper &
bin/hbase master &
bin/hbase regionserver &
----
=== Future improvements
==== Fix target/cached_classpath.txt
You must override the standard hadoop-core jar file from the `target/cached_classpath.txt` file with the version containing the HADOOP-7070 fix.
You can use the following script to do this:
----
echo `find ~/.m2 -name "*hadoop-core*7070*SNAPSHOT.jar"` ':' `cat target/cached_classpath.txt` | sed 's/ //g' > target/tmp.txt
mv target/tmp.txt target/cached_classpath.txt
----
==== Set JAAS configuration programmatically
This would avoid the need for a separate Hadoop jar that fixes link:https://issues.apache.org/jira/browse/HADOOP-7070[HADOOP-7070].
==== Elimination of `kerberos.removeHostFromPrincipal` and `kerberos.removeRealmFromPrincipal`
ifdef::backend-docbook[]
[index]
= Index
// Generated automatically by the DocBook toolchain.
endif::backend-docbook[]
@ -1,105 +0,0 @@
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
= Apache HBase (TM) Reference Guide
:Author: Apache HBase Team
:Email: <hbase-dev@lists.apache.org>
:doctype: book
:Version: {docVersion}
:revnumber: {docVersion}
// Logo for PDF -- doesn't render in HTML
:title-logo-image: image:hbase_logo_with_orca.png[pdfwidth=4.25in,align=center]
:numbered:
:toc: left
:toclevels: 1
:toc-title: Contents
:sectanchors:
:icons: font
:iconsdir: icons
:linkcss:
:experimental:
:source-language: java
:leveloffset: 0
:stem:
// Logo for HTML -- doesn't render in PDF
ifdef::backend-html5[]
++++
<div>
<a href="https://hbase.apache.org"><img src="images/hbase_logo_with_orca.png" alt="Apache HBase Logo" /></a>
</div>
++++
endif::[]
// The directory is called _chapters because asciidoctor skips direct
// processing of files found in directories starting with an _. This
// prevents each chapter being built as its own book.
include::_chapters/preface.adoc[]
include::_chapters/getting_started.adoc[]
include::_chapters/configuration.adoc[]
include::_chapters/upgrading.adoc[]
include::_chapters/shell.adoc[]
include::_chapters/datamodel.adoc[]
include::_chapters/schema_design.adoc[]
include::_chapters/mapreduce.adoc[]
include::_chapters/security.adoc[]
include::_chapters/architecture.adoc[]
include::_chapters/hbase_mob.adoc[]
include::_chapters/snapshot_scanner.adoc[]
include::_chapters/inmemory_compaction.adoc[]
include::_chapters/offheap_read_write.adoc[]
include::_chapters/hbase_apis.adoc[]
include::_chapters/external_apis.adoc[]
include::_chapters/thrift_filter_language.adoc[]
include::_chapters/spark.adoc[]
include::_chapters/cp.adoc[]
include::_chapters/performance.adoc[]
include::_chapters/profiler.adoc[]
include::_chapters/troubleshooting.adoc[]
include::_chapters/case_studies.adoc[]
include::_chapters/ops_mgt.adoc[]
include::_chapters/developer.adoc[]
include::_chapters/unit_testing.adoc[]
include::_chapters/protobuf.adoc[]
include::_chapters/pv2.adoc[]
include::_chapters/amv2.adoc[]
include::_chapters/zookeeper.adoc[]
include::_chapters/community.adoc[]
include::_chapters/hbtop.adoc[]
include::_chapters/tracing.adoc[]
= Appendix
include::_chapters/appendix_contributing_to_documentation.adoc[]
include::_chapters/faq.adoc[]
include::_chapters/appendix_acl_matrix.adoc[]
include::_chapters/compression.adoc[]
include::_chapters/sql.adoc[]
include::_chapters/ycsb.adoc[]
include::_chapters/appendix_hfile_format.adoc[]
include::_chapters/other_info.adoc[]
include::_chapters/hbase_history.adoc[]
include::_chapters/asf.adoc[]
include::_chapters/orca.adoc[]
include::_chapters/rpc.adoc[]
include::_chapters/appendix_hbase_incompatibilities.adoc[]
@ -1,400 +0,0 @@
/* Asciidoctor default stylesheet | MIT License | http://asciidoctor.org */
/* Remove the comments around the @import statement below when using this as a custom stylesheet */
/*@import "https://fonts.googleapis.com/css?family=Open+Sans:300,300italic,400,400italic,600,600italic%7CNoto+Serif:400,400italic,700,700italic%7CDroid+Sans+Mono:400";*/
article,aside,details,figcaption,figure,footer,header,hgroup,main,nav,section,summary{display:block}
audio,canvas,video{display:inline-block}
audio:not([controls]){display:none;height:0}
[hidden],template{display:none}
script{display:none!important}
html{font-family:sans-serif;-ms-text-size-adjust:100%;-webkit-text-size-adjust:100%}
body{margin:0}
a{background:transparent}
a:focus{outline:thin dotted}
a:active,a:hover{outline:0}
h1{font-size:2em;margin:.67em 0}
abbr[title]{border-bottom:1px dotted}
b,strong{font-weight:bold}
dfn{font-style:italic}
hr{-moz-box-sizing:content-box;box-sizing:content-box;height:0}
mark{background:#ff0;color:#000}
code,kbd,pre,samp{font-family:monospace;font-size:1em}
pre{white-space:pre-wrap}
q{quotes:"\201C" "\201D" "\2018" "\2019"}
small{font-size:80%}
sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}
sup{top:-.5em}
sub{bottom:-.25em}
img{border:0}
svg:not(:root){overflow:hidden}
figure{margin:0}
fieldset{border:1px solid silver;margin:0 2px;padding:.35em .625em .75em}
legend{border:0;padding:0}
button,input,select,textarea{font-family:inherit;font-size:100%;margin:0}
button,input{line-height:normal}
button,select{text-transform:none}
button,html input[type="button"],input[type="reset"],input[type="submit"]{-webkit-appearance:button;cursor:pointer}
button[disabled],html input[disabled]{cursor:default}
input[type="checkbox"],input[type="radio"]{box-sizing:border-box;padding:0}
input[type="search"]{-webkit-appearance:textfield;-moz-box-sizing:content-box;-webkit-box-sizing:content-box;box-sizing:content-box}
input[type="search"]::-webkit-search-cancel-button,input[type="search"]::-webkit-search-decoration{-webkit-appearance:none}
button::-moz-focus-inner,input::-moz-focus-inner{border:0;padding:0}
textarea{overflow:auto;vertical-align:top}
table{border-collapse:collapse;border-spacing:0}
*,*:before,*:after{-moz-box-sizing:border-box;-webkit-box-sizing:border-box;box-sizing:border-box}
html,body{font-size:100%}
body{background:#fff;color:rgba(0,0,0,.8);padding:0;margin:0;font-family:"Noto Serif","DejaVu Serif",serif;font-weight:400;font-style:normal;line-height:1;position:relative;cursor:auto}
a:hover{cursor:pointer}
img,object,embed{max-width:100%;height:auto}
object,embed{height:100%}
img{-ms-interpolation-mode:bicubic}
#map_canvas img,#map_canvas embed,#map_canvas object,.map_canvas img,.map_canvas embed,.map_canvas object{max-width:none!important}
.left{float:left!important}
.right{float:right!important}
.text-left{text-align:left!important}
.text-right{text-align:right!important}
.text-center{text-align:center!important}
.text-justify{text-align:justify!important}
.hide{display:none}
.antialiased,body{-webkit-font-smoothing:antialiased}
img{display:inline-block;vertical-align:middle}
textarea{height:auto;min-height:50px}
select{width:100%}
p.lead,.paragraph.lead>p,#preamble>.sectionbody>.paragraph:first-of-type p{font-size:1.21875em;line-height:1.6}
.subheader,.admonitionblock td.content>.title,.audioblock>.title,.exampleblock>.title,.imageblock>.title,.listingblock>.title,.literalblock>.title,.stemblock>.title,.openblock>.title,.paragraph>.title,.quoteblock>.title,table.tableblock>.title,.verseblock>.title,.videoblock>.title,.dlist>.title,.olist>.title,.ulist>.title,.qlist>.title,.hdlist>.title{line-height:1.45;color:#7a2518;font-weight:400;margin-top:0;margin-bottom:.25em}
div,dl,dt,dd,ul,ol,li,h1,h2,h3,#toctitle,.sidebarblock>.content>.title,h4,h5,h6,pre,form,p,blockquote,th,td{margin:0;padding:0;direction:ltr}
a{color:#2156a5;text-decoration:underline;line-height:inherit}
a:hover,a:focus{color:#1d4b8f}
a img{border:none}
p{font-family:inherit;font-weight:400;font-size:1em;line-height:1.6;margin-bottom:1.25em;text-rendering:optimizeLegibility}
p aside{font-size:.875em;line-height:1.35;font-style:italic}
h1,h2,h3,#toctitle,.sidebarblock>.content>.title,h4,h5,h6{font-family:"Open Sans","DejaVu Sans",sans-serif;font-weight:300;font-style:normal;color:#990000;text-rendering:optimizeLegibility;margin-top:1em;margin-bottom:.5em;line-height:1.0125em}
h1 small,h2 small,h3 small,#toctitle small,.sidebarblock>.content>.title small,h4 small,h5 small,h6 small{font-size:60%;color:#e99b8f;line-height:0}
h1{font-size:2.125em}
h2{font-size:1.6875em}
h3,#toctitle,.sidebarblock>.content>.title{font-size:1.375em}
h4,h5{font-size:1.125em}
h6{font-size:1em}
hr{border:solid #ddddd8;border-width:1px 0 0;clear:both;margin:1.25em 0 1.1875em;height:0}
em,i{font-style:italic;line-height:inherit}
strong,b{font-weight:bold;line-height:inherit}
small{font-size:60%;line-height:inherit}
code{font-family:"Droid Sans Mono","DejaVu Sans Mono",monospace;font-weight:400;color:rgba(0,0,0,.9)}
ul,ol,dl{font-size:1em;line-height:1.6;margin-bottom:1.25em;list-style-position:outside;font-family:inherit}
ul,ol,ul.no-bullet,ol.no-bullet{margin-left:1.5em}
ul li ul,ul li ol{margin-left:1.25em;margin-bottom:0;font-size:1em}
ul.square li ul,ul.circle li ul,ul.disc li ul{list-style:inherit}
ul.square{list-style-type:square}
ul.circle{list-style-type:circle}
ul.disc{list-style-type:disc}
ul.no-bullet{list-style:none}
ol li ul,ol li ol{margin-left:1.25em;margin-bottom:0}
dl dt{margin-bottom:.3125em;font-weight:bold}
dl dd{margin-bottom:1.25em}
abbr,acronym{text-transform:uppercase;font-size:90%;color:rgba(0,0,0,.8);border-bottom:1px dotted #ddd;cursor:help}
abbr{text-transform:none}
blockquote{margin:0 0 1.25em;padding:.5625em 1.25em 0 1.1875em;border-left:1px solid #ddd}
blockquote cite{display:block;font-size:.9375em;color:rgba(0,0,0,.6)}
blockquote cite:before{content:"\2014 \0020"}
blockquote cite a,blockquote cite a:visited{color:rgba(0,0,0,.6)}
blockquote,blockquote p{line-height:1.6;color:rgba(0,0,0,.85)}
@media only screen and (min-width:768px){h1,h2,h3,#toctitle,.sidebarblock>.content>.title,h4,h5,h6{line-height:1.2}
h1{font-size:2.75em}
h2{font-size:2.3125em}
h3,#toctitle,.sidebarblock>.content>.title{font-size:1.6875em}
h4{font-size:1.4375em}}table{background:#fff;margin-bottom:1.25em;border:solid 1px #dedede}
table thead,table tfoot{background:#f7f8f7;font-weight:bold}
table thead tr th,table thead tr td,table tfoot tr th,table tfoot tr td{padding:.5em .625em .625em;font-size:inherit;color:rgba(0,0,0,.8);text-align:left}
table tr th,table tr td{padding:.5625em .625em;font-size:inherit;color:rgba(0,0,0,.8)}
table tr.even,table tr.alt,table tr:nth-of-type(even){background:#f8f8f7}
table thead tr th,table tfoot tr th,table tbody tr td,table tr td,table tfoot tr td{display:table-cell;line-height:1.6}
h1,h2,h3,#toctitle,.sidebarblock>.content>.title,h4,h5,h6{line-height:1.2;word-spacing:-.05em}
h1 strong,h2 strong,h3 strong,#toctitle strong,.sidebarblock>.content>.title strong,h4 strong,h5 strong,h6 strong{font-weight:400}
.clearfix:before,.clearfix:after,.float-group:before,.float-group:after{content:" ";display:table}
.clearfix:after,.float-group:after{clear:both}
*:not(pre)>code{font-size:.9375em;font-style:normal!important;letter-spacing:0;padding:.1em .5ex;word-spacing:-.15em;background-color:#f7f7f8;-webkit-border-radius:4px;border-radius:4px;line-height:1.45;text-rendering:optimizeSpeed}
pre,pre>code{line-height:1.45;color:rgba(0,0,0,.9);font-family:"Droid Sans Mono","DejaVu Sans Mono",monospace;font-weight:400;text-rendering:optimizeSpeed}
.keyseq{color:rgba(51,51,51,.8)}
kbd{display:inline-block;color:rgba(0,0,0,.8);font-size:.75em;line-height:1.4;background-color:#f7f7f7;border:1px solid #ccc;-webkit-border-radius:3px;border-radius:3px;-webkit-box-shadow:0 1px 0 rgba(0,0,0,.2),0 0 0 .1em white inset;box-shadow:0 1px 0 rgba(0,0,0,.2),0 0 0 .1em #fff inset;margin:-.15em .15em 0 .15em;padding:.2em .6em .2em .5em;vertical-align:middle;white-space:nowrap}
.keyseq kbd:first-child{margin-left:0}
.keyseq kbd:last-child{margin-right:0}
.menuseq,.menu{color:rgba(0,0,0,.8)}
b.button:before,b.button:after{position:relative;top:-1px;font-weight:400}
b.button:before{content:"[";padding:0 3px 0 2px}
b.button:after{content:"]";padding:0 2px 0 3px}
p a>code:hover{color:rgba(0,0,0,.9)}
#header,#content,#footnotes,#footer{width:100%;margin-left:auto;margin-right:auto;margin-top:0;margin-bottom:0;max-width:62.5em;*zoom:1;position:relative;padding-left:.9375em;padding-right:.9375em}
#header:before,#header:after,#content:before,#content:after,#footnotes:before,#footnotes:after,#footer:before,#footer:after{content:" ";display:table}
#header:after,#content:after,#footnotes:after,#footer:after{clear:both}
#content{margin-top:1.25em}
#content:before{content:none}
#header>h1:first-child{color:rgba(0,0,0,.85);margin-top:2.25rem;margin-bottom:0}
#header>h1:first-child+#toc{margin-top:8px;border-top:1px solid #ddddd8}
#header>h1:only-child,body.toc2 #header>h1:nth-last-child(2){border-bottom:1px solid #ddddd8;padding-bottom:8px}
#header .details{border-bottom:1px solid #ddddd8;line-height:1.45;padding-top:.25em;padding-bottom:.25em;padding-left:.25em;color:rgba(0,0,0,.6);display:-ms-flexbox;display:-webkit-flex;display:flex;-ms-flex-flow:row wrap;-webkit-flex-flow:row wrap;flex-flow:row wrap}
#header .details span:first-child{margin-left:-.125em}
#header .details span.email a{color:rgba(0,0,0,.85)}
#header .details br{display:none}
#header .details br+span:before{content:"\00a0\2013\00a0"}
#header .details br+span.author:before{content:"\00a0\22c5\00a0";color:rgba(0,0,0,.85)}
#header .details br+span#revremark:before{content:"\00a0|\00a0"}
#header #revnumber{text-transform:capitalize}
#header #revnumber:after{content:"\00a0"}
#content>h1:first-child:not([class]){color:rgba(0,0,0,.85);border-bottom:1px solid #ddddd8;padding-bottom:8px;margin-top:0;padding-top:1rem;margin-bottom:1.25rem}
#toc{border-bottom:1px solid #efefed;padding-bottom:.5em}
#toc>ul{margin-left:.125em}
#toc ul.sectlevel0>li>a{font-style:italic}
#toc ul.sectlevel0 ul.sectlevel1{margin:.5em 0}
#toc ul{font-family:"Open Sans","DejaVu Sans",sans-serif;list-style-type:none}
#toc a{text-decoration:none}
#toc a:active{text-decoration:underline}
#toctitle{color:#7a2518;font-size:1.2em}
@media only screen and (min-width:768px){#toctitle{font-size:1.375em}
body.toc2{padding-left:15em;padding-right:0}
#toc.toc2{margin-top:0!important;background-color:#f8f8f7;position:fixed;width:15em;left:0;top:0;border-right:1px solid #efefed;border-top-width:0!important;border-bottom-width:0!important;z-index:1000;padding:1.25em 1em;height:100%;overflow:auto}
#toc.toc2 #toctitle{margin-top:0;font-size:1.2em}
#toc.toc2>ul{font-size:.9em;margin-bottom:0}
#toc.toc2 ul ul{margin-left:0;padding-left:1em}
#toc.toc2 ul.sectlevel0 ul.sectlevel1{padding-left:0;margin-top:.5em;margin-bottom:.5em}
body.toc2.toc-right{padding-left:0;padding-right:15em}
body.toc2.toc-right #toc.toc2{border-right-width:0;border-left:1px solid #efefed;left:auto;right:0}}@media only screen and (min-width:1280px){body.toc2{padding-left:20em;padding-right:0}
#toc.toc2{width:20em}
#toc.toc2 #toctitle{font-size:1.375em}
#toc.toc2>ul{font-size:.95em}
#toc.toc2 ul ul{padding-left:1.25em}
body.toc2.toc-right{padding-left:0;padding-right:20em}}#content #toc{border-style:solid;border-width:1px;border-color:#e0e0dc;margin-bottom:1.25em;padding:1.25em;background:#f8f8f7;-webkit-border-radius:4px;border-radius:4px}
#content #toc>:first-child{margin-top:0}
#content #toc>:last-child{margin-bottom:0}
#footer{max-width:100%;background-color:rgba(0,0,0,.8);padding:1.25em}
#footer-text,#footer_nav{color:rgba(255,255,255,.8);line-height:1.44}
#footer a{color:#990000}
.sect1{padding-bottom:.625em}
@media only screen and (min-width:768px){.sect1{padding-bottom:1.25em}}.sect1+.sect1{border-top:1px solid #efefed}
#content h1>a.anchor,h2>a.anchor,h3>a.anchor,#toctitle>a.anchor,.sidebarblock>.content>.title>a.anchor,h4>a.anchor,h5>a.anchor,h6>a.anchor{position:absolute;z-index:1001;width:1.5ex;margin-left:-1.5ex;display:block;text-decoration:none!important;visibility:hidden;text-align:center;font-weight:400}
#content h1>a.anchor:before,h2>a.anchor:before,h3>a.anchor:before,#toctitle>a.anchor:before,.sidebarblock>.content>.title>a.anchor:before,h4>a.anchor:before,h5>a.anchor:before,h6>a.anchor:before{content:"\00A7";font-size:.85em;display:block;padding-top:.1em}
#content h1:hover>a.anchor,#content h1>a.anchor:hover,h2:hover>a.anchor,h2>a.anchor:hover,h3:hover>a.anchor,#toctitle:hover>a.anchor,.sidebarblock>.content>.title:hover>a.anchor,h3>a.anchor:hover,#toctitle>a.anchor:hover,.sidebarblock>.content>.title>a.anchor:hover,h4:hover>a.anchor,h4>a.anchor:hover,h5:hover>a.anchor,h5>a.anchor:hover,h6:hover>a.anchor,h6>a.anchor:hover{visibility:visible}
#content h1>a.link,h2>a.link,h3>a.link,#toctitle>a.link,.sidebarblock>.content>.title>a.link,h4>a.link,h5>a.link,h6>a.link{color:#990000;text-decoration:none}
#content h1>a.link:hover,h2>a.link:hover,h3>a.link:hover,#toctitle>a.link:hover,.sidebarblock>.content>.title>a.link:hover,h4>a.link:hover,h5>a.link:hover,h6>a.link:hover{color:#a53221}
.audioblock,.imageblock,.literalblock,.listingblock,.stemblock,.videoblock{margin-bottom:1.25em}
.admonitionblock td.content>.title,.audioblock>.title,.exampleblock>.title,.imageblock>.title,.listingblock>.title,.literalblock>.title,.stemblock>.title,.openblock>.title,.paragraph>.title,.quoteblock>.title,table.tableblock>.title,.verseblock>.title,.videoblock>.title,.dlist>.title,.olist>.title,.ulist>.title,.qlist>.title,.hdlist>.title{text-rendering:optimizeLegibility;text-align:left;font-family:"Noto Serif","DejaVu Serif",serif;font-size:1rem;font-style:italic}
table.tableblock>caption.title{white-space:nowrap;overflow:visible;max-width:0}
.paragraph.lead>p,#preamble>.sectionbody>.paragraph:first-of-type p{color:rgba(0,0,0,.85)}
table.tableblock #preamble>.sectionbody>.paragraph:first-of-type p{font-size:inherit}
.admonitionblock>table{border-collapse:separate;border:0;background:none;width:100%}
.admonitionblock>table td.icon{text-align:center;width:80px}
.admonitionblock>table td.icon img{max-width:none}
.admonitionblock>table td.icon .title{font-weight:bold;font-family:"Open Sans","DejaVu Sans",sans-serif;text-transform:uppercase}
.admonitionblock>table td.content{padding-left:1.125em;padding-right:1.25em;border-left:1px solid #ddddd8;color:rgba(0,0,0,.6)}
.admonitionblock>table td.content>:last-child>:last-child{margin-bottom:0}
.exampleblock>.content{border-style:solid;border-width:1px;border-color:#e6e6e6;margin-bottom:1.25em;padding:1.25em;background:#fff;-webkit-border-radius:4px;border-radius:4px}
.exampleblock>.content>:first-child{margin-top:0}
.exampleblock>.content>:last-child{margin-bottom:0}
.sidebarblock{border-style:solid;border-width:1px;border-color:#e0e0dc;margin-bottom:1.25em;padding:1.25em;background:#f8f8f7;-webkit-border-radius:4px;border-radius:4px}
.sidebarblock>:first-child{margin-top:0}
.sidebarblock>:last-child{margin-bottom:0}
.sidebarblock>.content>.title{color:#7a2518;margin-top:0;text-align:center}
.exampleblock>.content>:last-child>:last-child,.exampleblock>.content .olist>ol>li:last-child>:last-child,.exampleblock>.content .ulist>ul>li:last-child>:last-child,.exampleblock>.content .qlist>ol>li:last-child>:last-child,.sidebarblock>.content>:last-child>:last-child,.sidebarblock>.content .olist>ol>li:last-child>:last-child,.sidebarblock>.content .ulist>ul>li:last-child>:last-child,.sidebarblock>.content .qlist>ol>li:last-child>:last-child{margin-bottom:0}
.literalblock pre,.listingblock pre:not(.highlight),.listingblock pre[class="highlight"],.listingblock pre[class^="highlight "],.listingblock pre.CodeRay,.listingblock pre.prettyprint{background:#f7f7f8}
.sidebarblock .literalblock pre,.sidebarblock .listingblock pre:not(.highlight),.sidebarblock .listingblock pre[class="highlight"],.sidebarblock .listingblock pre[class^="highlight "],.sidebarblock .listingblock pre.CodeRay,.sidebarblock .listingblock pre.prettyprint{background:#f2f1f1}
.literalblock pre,.literalblock pre[class],.listingblock pre,.listingblock pre[class]{-webkit-border-radius:4px;border-radius:4px;word-wrap:break-word;padding:1em;font-size:.8125em}
.literalblock pre.nowrap,.literalblock pre[class].nowrap,.listingblock pre.nowrap,.listingblock pre[class].nowrap{overflow-x:auto;white-space:pre;word-wrap:normal}
@media only screen and (min-width:768px){.literalblock pre,.literalblock pre[class],.listingblock pre,.listingblock pre[class]{font-size:.90625em}}@media only screen and (min-width:1280px){.literalblock pre,.literalblock pre[class],.listingblock pre,.listingblock pre[class]{font-size:1em}}.literalblock.output pre{color:#f7f7f8;background-color:rgba(0,0,0,.9)}
.listingblock pre.highlightjs{padding:0}
.listingblock pre.highlightjs>code{padding:1em;-webkit-border-radius:4px;border-radius:4px}
.listingblock pre.prettyprint{border-width:0}
.listingblock>.content{position:relative}
.listingblock code[data-lang]:before{display:none;content:attr(data-lang);position:absolute;font-size:.75em;top:.425rem;right:.5rem;line-height:1;text-transform:uppercase;color:#999}
.listingblock:hover code[data-lang]:before{display:block}
.listingblock.terminal pre .command:before{content:attr(data-prompt);padding-right:.5em;color:#999}
.listingblock.terminal pre .command:not([data-prompt]):before{content:"$"}
table.pyhltable{border-collapse:separate;border:0;margin-bottom:0;background:none}
table.pyhltable td{vertical-align:top;padding-top:0;padding-bottom:0}
table.pyhltable td.code{padding-left:.75em;padding-right:0}
pre.pygments .lineno,table.pyhltable td:not(.code){color:#999;padding-left:0;padding-right:.5em;border-right:1px solid #ddddd8}
pre.pygments .lineno{display:inline-block;margin-right:.25em}
table.pyhltable .linenodiv{background:none!important;padding-right:0!important}
.quoteblock{margin:0 1em 1.25em 1.5em;display:table}
.quoteblock>.title{margin-left:-1.5em;margin-bottom:.75em}
.quoteblock blockquote,.quoteblock blockquote p{color:rgba(0,0,0,.85);font-size:1.15rem;line-height:1.75;word-spacing:.1em;letter-spacing:0;font-style:italic;text-align:justify}
.quoteblock blockquote{margin:0;padding:0;border:0}
.quoteblock blockquote:before{content:"\201c";float:left;font-size:2.75em;font-weight:bold;line-height:.6em;margin-left:-.6em;color:#7a2518;text-shadow:0 1px 2px rgba(0,0,0,.1)}
.quoteblock blockquote>.paragraph:last-child p{margin-bottom:0}
.quoteblock .attribution{margin-top:.5em;margin-right:.5ex;text-align:right}
.quoteblock .quoteblock{margin-left:0;margin-right:0;padding:.5em 0;border-left:3px solid rgba(0,0,0,.6)}
.quoteblock .quoteblock blockquote{padding:0 0 0 .75em}
.quoteblock .quoteblock blockquote:before{display:none}
.verseblock{margin:0 1em 1.25em 1em}
.verseblock pre{font-family:"Open Sans","DejaVu Sans",sans-serif;font-size:1.15rem;color:rgba(0,0,0,.85);font-weight:300;text-rendering:optimizeLegibility}
.verseblock pre strong{font-weight:400}
.verseblock .attribution{margin-top:1.25rem;margin-left:.5ex}
.quoteblock .attribution,.verseblock .attribution{font-size:.9375em;line-height:1.45;font-style:italic}
.quoteblock .attribution br,.verseblock .attribution br{display:none}
.quoteblock .attribution cite,.verseblock .attribution cite{display:block;letter-spacing:-.05em;color:rgba(0,0,0,.6)}
.quoteblock.abstract{margin:0 0 1.25em 0;display:block}
.quoteblock.abstract blockquote,.quoteblock.abstract blockquote p{text-align:left;word-spacing:0}
.quoteblock.abstract blockquote:before,.quoteblock.abstract blockquote p:first-of-type:before{display:none}
table.tableblock{max-width:100%;border-collapse:separate}
table.tableblock td>.paragraph:last-child p>p:last-child,table.tableblock th>p:last-child,table.tableblock td>p:last-child{margin-bottom:0}
table.spread{width:100%}
table.tableblock,th.tableblock,td.tableblock{border:0 solid #dedede}
table.grid-all th.tableblock,table.grid-all td.tableblock{border-width:0 1px 1px 0}
table.grid-all tfoot>tr>th.tableblock,table.grid-all tfoot>tr>td.tableblock{border-width:1px 1px 0 0}
table.grid-cols th.tableblock,table.grid-cols td.tableblock{border-width:0 1px 0 0}
table.grid-all *>tr>.tableblock:last-child,table.grid-cols *>tr>.tableblock:last-child{border-right-width:0}
table.grid-rows th.tableblock,table.grid-rows td.tableblock{border-width:0 0 1px 0}
table.grid-all tbody>tr:last-child>th.tableblock,table.grid-all tbody>tr:last-child>td.tableblock,table.grid-all thead:last-child>tr>th.tableblock,table.grid-rows tbody>tr:last-child>th.tableblock,table.grid-rows tbody>tr:last-child>td.tableblock,table.grid-rows thead:last-child>tr>th.tableblock{border-bottom-width:0}
table.grid-rows tfoot>tr>th.tableblock,table.grid-rows tfoot>tr>td.tableblock{border-width:1px 0 0 0}
table.frame-all{border-width:1px}
table.frame-sides{border-width:0 1px}
table.frame-topbot{border-width:1px 0}
th.halign-left,td.halign-left{text-align:left}
th.halign-right,td.halign-right{text-align:right}
th.halign-center,td.halign-center{text-align:center}
th.valign-top,td.valign-top{vertical-align:top}
th.valign-bottom,td.valign-bottom{vertical-align:bottom}
th.valign-middle,td.valign-middle{vertical-align:middle}
table thead th,table tfoot th{font-weight:bold}
tbody tr th{display:table-cell;line-height:1.6;background:#f7f8f7}
tbody tr th,tbody tr th p,tfoot tr th,tfoot tr th p{color:rgba(0,0,0,.8);font-weight:bold}
p.tableblock>code:only-child{background:none;padding:0}
p.tableblock{font-size:1em}
td>div.verse{white-space:pre}
ol{margin-left:1.75em}
ul li ol{margin-left:1.5em}
dl dd{margin-left:1.125em}
dl dd:last-child,dl dd:last-child>:last-child{margin-bottom:0}
ol>li p,ul>li p,ul dd,ol dd,.olist .olist,.ulist .ulist,.ulist .olist,.olist .ulist{margin-bottom:.625em}
ul.unstyled,ol.unnumbered,ul.checklist,ul.none{list-style-type:none}
ul.unstyled,ol.unnumbered,ul.checklist{margin-left:.625em}
ul.checklist li>p:first-child>.fa-square-o:first-child,ul.checklist li>p:first-child>.fa-check-square-o:first-child{width:1em;font-size:.85em}
ul.checklist li>p:first-child>input[type="checkbox"]:first-child{width:1em;position:relative;top:1px}
ul.inline{margin:0 auto .625em auto;margin-left:-1.375em;margin-right:0;padding:0;list-style:none;overflow:hidden}
ul.inline>li{list-style:none;float:left;margin-left:1.375em;display:block}
ul.inline>li>*{display:block}
.unstyled dl dt{font-weight:400;font-style:normal}
ol.arabic{list-style-type:decimal}
ol.decimal{list-style-type:decimal-leading-zero}
ol.loweralpha{list-style-type:lower-alpha}
ol.upperalpha{list-style-type:upper-alpha}
ol.lowerroman{list-style-type:lower-roman}
ol.upperroman{list-style-type:upper-roman}
ol.lowergreek{list-style-type:lower-greek}
.hdlist>table,.colist>table{border:0;background:none}
.hdlist>table>tbody>tr,.colist>table>tbody>tr{background:none}
td.hdlist1{padding-right:.75em;font-weight:bold}
td.hdlist1,td.hdlist2{vertical-align:top}
.literalblock+.colist,.listingblock+.colist{margin-top:-.5em}
.colist>table tr>td:first-of-type{padding:0 .75em;line-height:1}
.colist>table tr>td:last-of-type{padding:.25em 0}
.thumb,.th{line-height:0;display:inline-block;border:solid 4px #fff;-webkit-box-shadow:0 0 0 1px #ddd;box-shadow:0 0 0 1px #ddd}
.imageblock.left,.imageblock[style*="float: left"]{margin:.25em .625em 1.25em 0}
.imageblock.right,.imageblock[style*="float: right"]{margin:.25em 0 1.25em .625em}
.imageblock>.title{margin-bottom:0}
.imageblock.thumb,.imageblock.th{border-width:6px}
.imageblock.thumb>.title,.imageblock.th>.title{padding:0 .125em}
.image.left,.image.right{margin-top:.25em;margin-bottom:.25em;display:inline-block;line-height:0}
.image.left{margin-right:.625em}
.image.right{margin-left:.625em}
a.image{text-decoration:none}
span.footnote,span.footnoteref{vertical-align:super;font-size:.875em}
span.footnote a,span.footnoteref a{text-decoration:none}
span.footnote a:active,span.footnoteref a:active{text-decoration:underline}
#footnotes{padding-top:.75em;padding-bottom:.75em;margin-bottom:.625em}
#footnotes hr{width:20%;min-width:6.25em;margin:-.25em 0 .75em 0;border-width:1px 0 0 0}
#footnotes .footnote{padding:0 .375em;line-height:1.3;font-size:.875em;margin-left:1.2em;text-indent:-1.2em;margin-bottom:.2em}
#footnotes .footnote a:first-of-type{font-weight:bold;text-decoration:none}
#footnotes .footnote:last-of-type{margin-bottom:0}
#content #footnotes{margin-top:-.625em;margin-bottom:0;padding:.75em 0}
.gist .file-data>table{border:0;background:#fff;width:100%;margin-bottom:0}
.gist .file-data>table td.line-data{width:99%}
div.unbreakable{page-break-inside:avoid}
.big{font-size:larger}
.small{font-size:smaller}
.underline{text-decoration:underline}
.overline{text-decoration:overline}
.line-through{text-decoration:line-through}
.aqua{color:#00bfbf}
.aqua-background{background-color:#00fafa}
.black{color:#000}
.black-background{background-color:#000}
.blue{color:#0000bf}
.blue-background{background-color:#0000fa}
.fuchsia{color:#bf00bf}
.fuchsia-background{background-color:#fa00fa}
.gray{color:#606060}
.gray-background{background-color:#7d7d7d}
.green{color:#006000}
.green-background{background-color:#007d00}
.lime{color:#00bf00}
.lime-background{background-color:#00fa00}
.maroon{color:#600000}
.maroon-background{background-color:#7d0000}
.navy{color:#000060}
.navy-background{background-color:#00007d}
.olive{color:#606000}
.olive-background{background-color:#7d7d00}
.purple{color:#600060}
.purple-background{background-color:#7d007d}
.red{color:#bf0000}
.red-background{background-color:#fa0000}
.silver{color:#909090}
.silver-background{background-color:#bcbcbc}
.teal{color:#006060}
.teal-background{background-color:#007d7d}
.white{color:#bfbfbf}
.white-background{background-color:#fafafa}
.yellow{color:#bfbf00}
.yellow-background{background-color:#fafa00}
span.icon>.fa{cursor:default}
.admonitionblock td.icon [class^="fa icon-"]{font-size:2.5em;text-shadow:1px 1px 2px rgba(0,0,0,.5);cursor:default}
.admonitionblock td.icon .icon-note:before{content:"\f05a";color:#19407c}
.admonitionblock td.icon .icon-tip:before{content:"\f0eb";text-shadow:1px 1px 2px rgba(155,155,0,.8);color:#111}
.admonitionblock td.icon .icon-warning:before{content:"\f071";color:#bf6900}
.admonitionblock td.icon .icon-caution:before{content:"\f06d";color:#bf3400}
.admonitionblock td.icon .icon-important:before{content:"\f06a";color:#bf0000}
.conum[data-value]{display:inline-block;color:#fff!important;background-color:rgba(0,0,0,.8);-webkit-border-radius:100px;border-radius:100px;text-align:center;font-size:.75em;width:1.67em;height:1.67em;line-height:1.67em;font-family:"Open Sans","DejaVu Sans",sans-serif;font-style:normal;font-weight:bold}
.conum[data-value] *{color:#fff!important}
.conum[data-value]+b{display:none}
.conum[data-value]:after{content:attr(data-value)}
pre .conum[data-value]{position:relative;top:-.125em}
b.conum *{color:inherit!important}
.conum:not([data-value]):empty{display:none}
h1,h2{letter-spacing:-.01em}
dt,th.tableblock,td.content{text-rendering:optimizeLegibility}
p,td.content{letter-spacing:-.01em}
p strong,td.content strong{letter-spacing:-.005em}
p,blockquote,dt,td.content{font-size:1.0625rem}
p{margin-bottom:1.25rem}
.sidebarblock p,.sidebarblock dt,.sidebarblock td.content,p.tableblock{font-size:1em}
.exampleblock>.content{background-color:#fffef7;border-color:#e0e0dc;-webkit-box-shadow:0 1px 4px #e0e0dc;box-shadow:0 1px 4px #e0e0dc}
.print-only{display:none!important}
@media print{@page{margin:1.25cm .75cm}
*{-webkit-box-shadow:none!important;box-shadow:none!important;text-shadow:none!important}
a{color:inherit!important;text-decoration:underline!important}
a.bare,a[href^="#"],a[href^="mailto:"]{text-decoration:none!important}
a[href^="http:"]:not(.bare):after,a[href^="https:"]:not(.bare):after{content:"(" attr(href) ")";display:inline-block;font-size:.875em;padding-left:.25em}
abbr[title]:after{content:" (" attr(title) ")"}
pre,blockquote,tr,img{page-break-inside:avoid}
thead{display:table-header-group}
img{max-width:100%!important}
p,blockquote,dt,td.content{font-size:1em;orphans:3;widows:3}
h2,h3,#toctitle,.sidebarblock>.content>.title{page-break-after:avoid}
#toc,.sidebarblock,.exampleblock>.content{background:none!important}
#toc{border-bottom:1px solid #ddddd8!important;padding-bottom:0!important}
.sect1{padding-bottom:0!important}
.sect1+.sect1{border:0!important}
#header>h1:first-child{margin-top:1.25rem}
body.book #header{text-align:center}
body.book #header>h1:first-child{border:0!important;margin:2.5em 0 1em 0}
body.book #header .details{border:0!important;display:block;padding:0!important}
body.book #header .details span:first-child{margin-left:0!important}
body.book #header .details br{display:block}
body.book #header .details br+span:before{content:none!important}
body.book #toc{border:0!important;text-align:left!important;padding:0!important;margin:0!important}
body.book #toc,body.book #preamble,body.book h1.sect0,body.book .sect1>h2{page-break-before:always}
.listingblock code[data-lang]:before{display:block}
#footer{background:none!important;padding:0 .9375em}
#footer-text{color:rgba(0,0,0,.6)!important;font-size:.9em}
.hide-on-print{display:none!important}
.print-only{display:block!important}
.hide-for-print{display:none!important}
.show-for-print{display:inherit!important}}
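
For context, the admonition rules above (.admonitionblock td.icon .icon-note and friends) assume the table-based markup asciidoctor conventionally emits for NOTE/TIP/WARNING/CAUTION/IMPORTANT blocks. A hedged sketch of that structure, inferred from the selectors rather than shown anywhere in this diff:

<div class="admonitionblock note">
  <table>
    <tr>
      <td class="icon"><i class="fa icon-note" title="Note"></i></td>
      <td class="content">Body text of the note.</td>
    </tr>
  </table>
</div>

With icon fonts enabled, the .icon-note:before{content:"\f05a"} rule injects the Font Awesome info glyph into the icon cell, so no image files are needed for admonition icons.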


@@ -1 +0,0 @@
../../site/resources/images/


@@ -1,90 +0,0 @@
<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<!--
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
This stylesheet is used to generate hbase-default.adoc from the hbase-default.xml configuration file.
-->
<xsl:output method="text"/>
<!-- Normalize space -->
<xsl:template match="text()">
<xsl:if test="normalize-space(.)">
<xsl:value-of select="normalize-space(.)"/>
</xsl:if>
</xsl:template>
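<!-- Note: with output method "text" the XML markup itself is never serialized; the
template above additionally drops whitespace-only text nodes and collapses internal
runs of whitespace via normalize-space(), so indentation inside hbase-default.xml
does not leak into the generated asciidoc. -->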
<!-- Grab nodes of the <configuration> element -->
<xsl:template match="configuration">
<!-- Print the license at the top of the file -->
////
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
////
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
[[hbase_default_configurations]]
=== HBase Default Configuration
The documentation below is generated using the default hbase configuration file, _hbase-default.xml_, as source.
<xsl:for-each select="property">
<xsl:if test="not(@skipInDoc)">
[[<xsl:apply-templates select="name"/>]]
`<xsl:apply-templates select="name"/>`::
+
.Description
<xsl:apply-templates select="description"/>
+
.Default
<xsl:choose>
<xsl:when test="value != ''">`<xsl:apply-templates select="value"/>`
</xsl:when>
<xsl:otherwise>none</xsl:otherwise>
</xsl:choose>
</xsl:if>
</xsl:for-each>
</xsl:template>
</xsl:stylesheet>
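
For context, a hedged sketch of what this transform consumes and emits; the property below is illustrative only, not taken from the real hbase-default.xml:

<property>
  <name>hbase.example.setting</name>
  <value>10</value>
  <description>An illustrative setting.</description>
</property>

Given that input, the stylesheet would append roughly this asciidoc:

[[hbase.example.setting]]
`hbase.example.setting`::
+
.Description
An illustrative setting.
+
.Default
`10`

A property with an empty <value> element gets the literal default "none", and any property carrying a skipInDoc attribute is omitted from the generated chapter entirely.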


@@ -103,9 +103,6 @@
<item name="Plugins" href="plugins.html"/>
</menu>
<menu name="Documentation and API">
<item name="Reference Guide" href="book.html" target="_blank" />
<item name="Reference Guide (PDF)" href="apache_hbase_reference_guide.pdf" target="_blank" />
<item name="Getting Started" href="book.html#quickstart" target="_blank" />
<item name="User API" href="apidocs/index.html" target="_blank" />
<item name="User API (Test)" href="testapidocs/index.html" target="_blank" />
<item name="Developer API" href="https://hbase.apache.org/2.0/devapidocs/index.html" target="_blank" />
@@ -119,16 +116,6 @@
<item name="Metrics" href="metrics.html" target="_blank" />
<item name="HBase on Windows" href="cygwin.html" target="_blank" />
<item name="Cluster replication" href="book.html#replication" target="_blank" />
<item name="1.2 Documentation">
<item name="API" href="1.2/apidocs/index.html" target="_blank" />
<item name="X-Ref" href="1.2/xref/index.html" target="_blank" />
<item name="Ref Guide (single-page)" href="1.2/book.html" target="_blank" />
</item>
<item name="1.1 Documentation">
<item name="API" href="1.1/apidocs/index.html" target="_blank" />
<item name="X-Ref" href="1.1/xref/index.html" target="_blank" />
<item name="Ref Guide (single-page)" href="1.1/book.html" target="_blank" />
</item>
</menu>
<menu name="ASF">
<item name="Apache Software Foundation" href="http://www.apache.org/foundation/" target="_blank" />