HBASE-12585 Fix refguide so it does hbase 1.0 style API everywhere

with callout on how we used to do it in pre-1.0
Misty Stanley-Jones 2015-02-10 13:14:25 +10:00
parent a8d325eed8
commit a0f2bc07b2
11 changed files with 92 additions and 78 deletions


@ -202,24 +202,24 @@ HBaseConfiguration conf2 = HBaseConfiguration.create();
HTable table2 = new HTable(conf2, "myTable");
----
For more information about how connections are handled in the HBase client, see link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HConnectionManager.html[HConnectionManager].
For more information about how connections are handled in the HBase client, see link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/ConnectionFactory.html[ConnectionFactory].
[[client.connection.pooling]]
===== Connection Pooling
For applications which require high-end multithreaded access (e.g., web-servers or application servers that may serve many application threads in a single JVM), you can pre-create an `HConnection`, as shown in the following example:
For applications which require high-end multithreaded access (e.g., web-servers or application servers that may serve many application threads in a single JVM), you can pre-create a `Connection`, as shown in the following example:
.Pre-Creating a `HConnection`
.Pre-Creating a `Connection`
====
[source,java]
----
// Create a connection to the cluster.
HConnection connection = HConnectionManager.createConnection(Configuration);
HTableInterface table = connection.getTable("myTable");
// use table as needed, the table returned is lightweight
table.close();
// use the connection for other access to the cluster
connection.close();
Configuration conf = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(conf)) {
try (Table table = connection.getTable(TableName.valueOf(tablename))) {
// use table as needed, the table returned is lightweight
}
}
----
====
@ -228,22 +228,20 @@ Constructing HTableInterface implementation is very lightweight and resources ar
.`HTablePool` is Deprecated
[WARNING]
====
Previous versions of this guide discussed `HTablePool`, which was deprecated in HBase 0.94, 0.95, and 0.96, and removed in 0.98.1, by link:https://issues.apache.org/jira/browse/HBASE-6580[HBASE-6500].
Please use link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HConnection.html[HConnection] instead.
Previous versions of this guide discussed `HTablePool`, which was deprecated in HBase 0.94, 0.95, and 0.96, and removed in 0.98.1, by link:https://issues.apache.org/jira/browse/HBASE-6580[HBASE-6580], or `HConnection`, which is deprecated in HBase 1.0 in favor of `Connection`.
Please use link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Connection.html[Connection] instead.
====
[[client.writebuffer]]
=== WriteBuffer and Batch Methods
If <<perf.hbase.client.autoflush>> is turned off on link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html[HTable], ``Put``s are sent to RegionServers when the writebuffer is filled.
The writebuffer is 2MB by default.
Before an (H)Table instance is discarded, either `close()` or `flushCommits()` should be invoked so Puts will not be lost.
In HBase 1.0 and later, link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html[HTable] is deprecated in favor of link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table]. `Table` does not use autoflush. To do buffered writes, use the BufferedMutator class.
NOTE: `htable.delete(Delete);` does not go in the writebuffer! This only applies to Puts.
Before a `Table` or `HTable` instance is discarded, invoke either `close()` or `flushCommits()`, so `Put`s will not be lost.
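As a minimal sketch of buffered writes with `BufferedMutator` (the table, family, and qualifier names here are placeholders, not from this guide):
[source,java]
----
Configuration conf = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(conf);
     BufferedMutator mutator = connection.getBufferedMutator(TableName.valueOf("myTable"))) {
  Put put = new Put(Bytes.toBytes("row1"));
  put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr"), Bytes.toBytes("value"));
  mutator.mutate(put);   // buffered on the client side
  mutator.flush();       // force the buffered mutations out; close() also flushes
}
----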
For additional information on write durability, review the link:../acid-semantics.html[ACID semantics] page.
For fine-grained control of batching of ``Put``s or ``Delete``s, see the link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch%28java.util.List%29[batch] methods on HTable.
For fine-grained control of batching of ``Put``s or ``Delete``s, see the link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#batch%28java.util.List%29[batch] methods on Table.
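As a rough sketch of a mixed batch (the table, row, family, and qualifier names are placeholders):
[source,java]
----
Configuration conf = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(conf);
     Table table = connection.getTable(TableName.valueOf("myTable"))) {
  List<Row> actions = new ArrayList<>();
  actions.add(new Put(Bytes.toBytes("row1"))
      .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr"), Bytes.toBytes("v1")));
  actions.add(new Delete(Bytes.toBytes("row2")));
  Object[] results = new Object[actions.size()];
  table.batch(actions, results);  // results are filled in the same order as the actions
}
----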
[[client.external]]
=== External Clients
@ -523,7 +521,7 @@ The methods exposed by `HMasterInterface` are primarily metadata-oriented method
* Table (createTable, modifyTable, removeTable, enable, disable)
* ColumnFamily (addColumn, modifyColumn, removeColumn)
* Region (move, assign, unassign) For example, when the `HBaseAdmin` method `disableTable` is invoked, it is serviced by the Master server.
* Region (move, assign, unassign) For example, when the `Admin` method `disableTable` is invoked, it is serviced by the Master server.
[[master.processes]]
=== Processes
@ -557,7 +555,7 @@ In a distributed cluster, a RegionServer runs on a <<arch.hdfs.dn>>.
The methods exposed by `HRegionInterface` contain both data-oriented and region-maintenance methods:
* Data (get, put, delete, next, etc.)
* Region (splitRegion, compactRegion, etc.) For example, when the `HBaseAdmin` method `majorCompact` is invoked on a table, the client is actually iterating through all regions for the specified table and requesting a major compaction directly to each region.
* Region (splitRegion, compactRegion, etc.) For example, when the `Admin` method `majorCompact` is invoked on a table, the client is actually iterating through all regions for the specified table and requesting a major compaction directly to each region.
[[regionserver.arch.processes]]
=== Processes
@ -2310,7 +2308,7 @@ Ensure to set the following for all clients (and servers) that will use region r
<name>hbase.client.primaryCallTimeout.multiget</name>
<value>10000</value>
<description>
The timeout (in microseconds), before secondary fallback RPCs are submitted for multi-get requests (HTable.get(List<Get>)) with Consistency.TIMELINE to the secondary replicas of the regions. Defaults to 10ms. Setting this lower will increase the number of RPCs, but will lower the p99 latencies.
The timeout (in microseconds), before secondary fallback RPCs are submitted for multi-get requests (Table.get(List<Get>)) with Consistency.TIMELINE to the secondary replicas of the regions. Defaults to 10ms. Setting this lower will increase the number of RPCs, but will lower the p99 latencies.
</description>
</property>
<property>
@ -2346,7 +2344,7 @@ flush 't1'
[source,java]
----
HTableDescriptor htd = new HTableDesctiptor(TableName.valueOf(“test_table”));
HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("test_table"));
htd.setRegionReplication(2);
...
admin.createTable(htd);


@ -626,7 +626,7 @@ Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "localhost"); // Here we are running zookeeper locally
----
If multiple ZooKeeper instances make up your ZooKeeper ensemble, they may be specified in a comma-separated list (just as in the _hbase-site.xml_ file). This populated `Configuration` instance can then be passed to an link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html[HTable], and so on.
If multiple ZooKeeper instances make up your ZooKeeper ensemble, they may be specified in a comma-separated list (just as in the _hbase-site.xml_ file). This populated `Configuration` instance can then be used to create a `Connection` via `ConnectionFactory`, from which you obtain link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table] instances, and so on.
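For instance, a minimal sketch (the quorum hosts and table name here are placeholders):
[source,java]
----
Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com");
try (Connection connection = ConnectionFactory.createConnection(config);
     Table table = connection.getTable(TableName.valueOf("myTable"))) {
  // use the table as needed
}
----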
[[example_config]]
== Example Configurations
@ -867,7 +867,7 @@ See the entry for `hbase.hregion.majorcompaction` in the <<compaction.parameters
====
Major compactions are absolutely necessary for StoreFile clean-up.
Do not disable them altogether.
You can run major compactions manually via the HBase shell or via the link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html#majorCompact%28java.lang.String%29[HBaseAdmin API].
You can run major compactions manually via the HBase shell or via the http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html#majorCompact(org.apache.hadoop.hbase.TableName)[Admin API].
====
For more information about compactions and the compaction file selection process, see <<compaction,compaction>>


@ -316,7 +316,7 @@ Note that generally the easiest way to specify a specific stop point for a scan
=== Delete
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Delete.html[Delete] removes a row from a table.
Deletes are executed via link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#delete(org.apache.hadoop.hbase.client.Delete)[HTable.delete].
Deletes are executed via link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#delete(org.apache.hadoop.hbase.client.Delete)[Table.delete].
HBase does not modify data in place, and so deletes are handled by creating new markers called _tombstones_.
These tombstones, along with the dead values, are cleaned up on major compactions.
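A minimal sketch of a single-row delete (the table and row names are placeholders):
[source,java]
----
Configuration conf = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(conf);
     Table table = connection.getTable(TableName.valueOf("myTable"))) {
  Delete delete = new Delete(Bytes.toBytes("row1"));
  table.delete(delete);  // writes a tombstone; the cells are physically removed at the next major compaction
}
----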


@ -38,11 +38,9 @@ See <<external_apis>> for more information.
.Create a Table Using Java
====
This example has been tested on HBase 0.96.1.1.
[source,java]
----
package com.example.hbase.admin;
import java.io.IOException;
@ -51,7 +49,7 @@ import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
import org.apache.hadoop.conf.Configuration;
@ -59,7 +57,7 @@ import static com.example.hbase.Constants.*;
public class CreateSchema {
public static void createOrOverwrite(HBaseAdmin admin, HTableDescriptor table) throws IOException {
public static void createOrOverwrite(Admin admin, HTableDescriptor table) throws IOException {
if (admin.tableExists(table.getTableName())) {
admin.disableTable(table.getTableName());
admin.deleteTable(table.getTableName());
@ -69,7 +67,7 @@ public class CreateSchema {
public static void createSchemaTables (Configuration config) {
try {
final HBaseAdmin admin = new HBaseAdmin(config);
final Connection connection = ConnectionFactory.createConnection(config);
final Admin admin = connection.getAdmin();
HTableDescriptor table = new HTableDescriptor(TableName.valueOf(TABLE_NAME));
table.addFamily(new HColumnDescriptor(CF_DEFAULT).setCompressionType(Algorithm.SNAPPY));
@ -90,14 +88,13 @@ public class CreateSchema {
.Add, Modify, and Delete a Table
====
This example has been tested on HBase 0.96.1.1.
[source,java]
----
public static void upgradeFrom0 (Configuration config) {
try {
final HBaseAdmin admin = new HBaseAdmin(config);
final Connection connection = ConnectionFactory.createConnection(config);
final Admin admin = connection.getAdmin();
TableName tableName = TableName.valueOf(TABLE_ASSETMETA);
HTableDescriptor table_assetmeta = new HTableDescriptor(tableName);
table_assetmeta.addFamily(new HColumnDescriptor(CF_DEFAULT).setCompressionType(Algorithm.SNAPPY));


@ -106,7 +106,7 @@ private static final int ERROR_EXIT_CODE = 4;
----
Here are some examples based on the following given case.
There are two HTable called test-01 and test-02, they have two column family cf1 and cf2 respectively, and deployed on the 3 RegionServers.
There are two tables, test-01 and test-02; they have column families cf1 and cf2 respectively, and are deployed on the 3 RegionServers.
See the following table.
[cols="1,1,1", options="header"]
@ -665,7 +665,7 @@ The LoadTestTool has received many updates in recent HBase releases, including s
[[ops.regionmgt.majorcompact]]
=== Major Compaction
Major compactions can be requested via the HBase shell or link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html#majorCompact%28java.lang.String%29[HBaseAdmin.majorCompact].
Major compactions can be requested via the HBase shell or link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html#majorCompact%28java.lang.String%29[Admin.majorCompact].
Note: major compactions do NOT do region merges.
See <<compaction,compaction>> for more information about compactions.
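For example, a manual major compaction of one table might be requested like this (a sketch; the table name is a placeholder):
[source,java]
----
Configuration conf = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(conf);
     Admin admin = connection.getAdmin()) {
  admin.majorCompact(TableName.valueOf("myTable"));  // the request is asynchronous
}
----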
@ -1352,7 +1352,7 @@ A single WAL edit goes through several steps in order to be replicated to a slav
. The edit is tagged with the master's UUID and added to a buffer.
When the buffer is filled, or the reader reaches the end of the file, the buffer is sent to a random region server on the slave cluster.
. The region server reads the edits sequentially and separates them into buffers, one buffer per table.
After all edits are read, each buffer is flushed using link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html[HTable], HBase's normal client.
After all edits are read, each buffer is flushed using link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table], HBase's normal client.
The master's UUID and the UUIDs of slaves which have already consumed the data are preserved in the edits when they are applied, in order to prevent replication loops.
. In the master, the offset for the WAL that is currently being replicated is registered in ZooKeeper.
@ -1994,7 +1994,7 @@ or in code it would be as follows:
[source,java]
----
void rename(HBaseAdmin admin, String oldTableName, String newTableName) {
void rename(Admin admin, TableName oldTableName, TableName newTableName) {
String snapshotName = randomName();
admin.disableTable(oldTableName);
admin.snapshot(snapshotName, oldTableName);


@ -439,7 +439,7 @@ When people get started with HBase they have a tendency to write code that looks
[source,java]
----
Get get = new Get(rowkey);
Result r = htable.get(get);
Result r = table.get(get);
byte[] b = r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("attr")); // returns current version of value
----
@ -452,7 +452,7 @@ public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Get get = new Get(rowkey);
Result r = htable.get(get);
Result r = table.get(get);
byte[] b = r.getValue(CF, ATTR); // returns current version of value
----
@ -475,7 +475,7 @@ A useful pattern to speed up the bulk import process is to pre-create empty regi
Be somewhat conservative in this, because too-many regions can actually degrade performance.
There are two different approaches to pre-creating splits.
The first approach is to rely on the default `HBaseAdmin` strategy (which is implemented in `Bytes.split`)...
The first approach is to rely on the default `Admin` strategy (which is implemented in `Bytes.split`)...
[source,java]
----
@ -511,12 +511,12 @@ The default value of `hbase.regionserver.optionallogflushinterval` is 1000ms.
[[perf.hbase.client.autoflush]]
=== HBase Client: AutoFlush
When performing a lot of Puts, make sure that setAutoFlush is set to false on your link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html[HTable] instance.
When performing a lot of Puts, make sure that setAutoFlush is set to false on your link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table] instance.
Otherwise, the Puts will be sent one at a time to the RegionServer.
Puts added via `htable.add(Put)` and `htable.add( <List> Put)` wind up in the same write buffer.
Puts added via `table.put(Put)` and `table.put(List<Put>)` wind up in the same write buffer.
If `autoFlush = false`, these messages are not sent until the write-buffer is filled.
To explicitly flush the messages, call `flushCommits`.
Calling `close` on the `HTable` instance will invoke `flushCommits`.
Calling `close` on the `Table` instance will invoke `flushCommits`.
[[perf.hbase.client.putwal]]
=== HBase Client: Turn off WAL on Puts
@ -553,7 +553,7 @@ If all your data is being written to one region at a time, then re-read the sect
Also, if you are pre-splitting regions and all your data is _still_ winding up in a single region even though your keys aren't monotonically increasing, confirm that your keyspace actually works with the split strategy.
There are a variety of reasons that regions may appear "well split" but won't work with your data.
As the HBase client communicates directly with the RegionServers, this can be obtained via link:hhttp://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#getRegionLocation(byte[])[HTable.getRegionLocation].
As the HBase client communicates directly with the RegionServers, this can be obtained via link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#getRegionLocation(byte[])[Table.getRegionLocation].
See <<precreate.regions>>, as well as <<perf.configurations>>
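In the HBase 1.0 API, region locations can also be obtained from a `RegionLocator` supplied by the `Connection`; a minimal sketch (the table name and row key are placeholders):
[source,java]
----
Configuration conf = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(conf);
     RegionLocator locator = connection.getRegionLocator(TableName.valueOf("myTable"))) {
  HRegionLocation location = locator.getRegionLocation(Bytes.toBytes("row1"));
  System.out.println(location.getHostname());  // RegionServer hosting this row's region
}
----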
@ -622,14 +622,14 @@ Always have ResultScanner processing enclosed in try/catch blocks.
----
Scan scan = new Scan();
// set attrs...
ResultScanner rs = htable.getScanner(scan);
ResultScanner rs = table.getScanner(scan);
try {
for (Result r = rs.next(); r != null; r = rs.next()) {
// process result...
}
} finally {
rs.close(); // always close the ResultScanner!
}
htable.close();
table.close();
----
[[perf.hbase.client.blockcache]]
@ -761,16 +761,16 @@ In this case, special care must be taken to regularly perform major compactions
As is documented in <<datamodel>>, marking rows as deleted creates additional StoreFiles which then need to be processed on reads.
Tombstones only get cleaned up with major compactions.
See also <<compaction>> and link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html#majorCompact%28java.lang.String%29[HBaseAdmin.majorCompact].
See also <<compaction>> and link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html#majorCompact%28java.lang.String%29[Admin.majorCompact].
[[perf.deleting.rpc]]
=== Delete RPC Behavior
Be aware that `htable.delete(Delete)` doesn't use the writeBuffer.
Be aware that `Table.delete(Delete)` doesn't use the writeBuffer.
It will execute an RegionServer RPC with each invocation.
For a large number of deletes, consider `htable.delete(List)`.
For a large number of deletes, consider `Table.delete(List)`.
See http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#delete%28org.apache.hadoop.hbase.client.Delete%29
See http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#delete%28org.apache.hadoop.hbase.client.Delete%29
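A rough sketch of batching deletes (the table name and row keys are placeholders):
[source,java]
----
Configuration conf = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(conf);
     Table table = connection.getTable(TableName.valueOf("myTable"))) {
  List<Delete> deletes = new ArrayList<>();
  for (String row : Arrays.asList("row-1", "row-2", "row-3")) {
    deletes.add(new Delete(Bytes.toBytes(row)));
  }
  table.delete(deletes);  // a single client call; the deletes are grouped by RegionServer
}
----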
[[perf.hdfs]]
== HDFS


@ -32,7 +32,7 @@ A good general introduction on the strength and weaknesses modelling on the vari
[[schema.creation]]
== Schema Creation
HBase schemas can be created or updated using the <<shell>> or by using link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html[HBaseAdmin] in the Java API.
HBase schemas can be created or updated using the <<shell>> or by using link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html[Admin] in the Java API.
Tables must be disabled when making ColumnFamily modifications, for example:
@ -40,7 +40,7 @@ Tables must be disabled when making ColumnFamily modifications, for example:
----
Configuration config = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
Connection connection = ConnectionFactory.createConnection(config);
Admin admin = connection.getAdmin();
TableName table = TableName.valueOf("myTable");
admin.disableTable(table);
@ -308,7 +308,7 @@ This is a fairly common question on the HBase dist-list so it pays to get the ro
=== Relationship Between RowKeys and Region Splits
If you pre-split your table, it is _critical_ to understand how your rowkey will be distributed across the region boundaries.
As an example of why this is important, consider the example of using displayable hex characters as the lead position of the key (e.g., "0000000000000000" to "ffffffffffffffff"). Running those key ranges through `Bytes.split` (which is the split strategy used when creating regions in `HBaseAdmin.createTable(byte[] startKey, byte[] endKey, numRegions)` for 10 regions will generate the following splits...
As an example of why this is important, consider using displayable hex characters as the lead position of the key (e.g., "0000000000000000" to "ffffffffffffffff"). Running those key ranges through `Bytes.split` (which is the split strategy used when creating regions in `Admin.createTable(byte[] startKey, byte[] endKey, numRegions)`) for 10 regions will generate the following splits...
----
@ -340,7 +340,7 @@ To conclude this example, the following is an example of how appropriate splits
[source,java]
----
public static boolean createTable(HBaseAdmin admin, HTableDescriptor table, byte[][] splits)
public static boolean createTable(Admin admin, HTableDescriptor table, byte[][] splits)
throws IOException {
try {
admin.createTable( table, splits );
@ -400,7 +400,7 @@ Take that into consideration when making your design, as well as block size for
=== Counters
One supported datatype that deserves special mention are "counters" (i.e., the ability to do atomic increments of numbers). See link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#increment%28org.apache.hadoop.hbase.client.Increment%29[Increment] in HTable.
One supported datatype that deserves special mention is the "counter" (i.e., the ability to do atomic increments of numbers). See link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#increment%28org.apache.hadoop.hbase.client.Increment%29[Increment] in `Table`.
Synchronization on counters is done on the RegionServer, not in the client.
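A minimal sketch of an atomic counter increment (the table, family, and qualifier names are placeholders):
[source,java]
----
Configuration conf = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(conf);
     Table table = connection.getTable(TableName.valueOf("counters"))) {
  long hits = table.incrementColumnValue(Bytes.toBytes("row1"),
      Bytes.toBytes("cf"), Bytes.toBytes("hits"), 1L);  // incremented atomically on the RegionServer
}
----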
@ -630,7 +630,7 @@ The rowkey of LOG_TYPES would be:
* [type] (e.g., byte indicating hostname vs. event-type)
* [bytes] variable length bytes for raw hostname or event-type.
A column for this rowkey could be a long with an assigned number, which could be obtained by using an link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#incrementColumnValue%28byte[],%20byte[],%20byte[],%20long%29[HBase counter].
A column for this rowkey could be a long with an assigned number, which could be obtained by using an link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#incrementColumnValue%28byte[],%20byte[],%20byte[],%20long%29[HBase counter].
So the resulting composite rowkey would be:


@ -131,14 +131,19 @@ To do so, add the following to the `hbase-site.xml` file on every client:
</property>
----
This configuration property can also be set on a per connection basis.
Set it in the `Configuration` supplied to `HTable`:
This configuration property can also be set on a per-connection basis.
Set it in the `Configuration` used to create the `Connection`:
[source,java]
----
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.rpc.protection", "privacy");
HTable table = new HTable(conf, tablename);
try (Connection connection = ConnectionFactory.createConnection(conf)) {
try (Table table = connection.getTable(TableName.valueOf(tablename))) {
// ... do your stuff
}
}
----
Expect a ~10% performance penalty for encrypted communication.
@ -881,18 +886,24 @@ public static void grantOnTable(final HBaseTestingUtility util, final String use
SecureTestUtil.updateACLs(util, new Callable<Void>() {
@Override
public Void call() throws Exception {
HTable acl = new HTable(util.getConfiguration(), AccessControlLists.ACL_TABLE_NAME);
try {
BlockingRpcChannel service = acl.coprocessorService(HConstants.EMPTY_START_ROW);
AccessControlService.BlockingInterface protocol =
AccessControlService.newBlockingStub(service);
ProtobufUtil.grant(protocol, user, table, family, qualifier, actions);
} finally {
acl.close();
Configuration conf = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(conf);
    Table acl = connection.getTable(AccessControlLists.ACL_TABLE_NAME)) {
BlockingRpcChannel service = acl.coprocessorService(HConstants.EMPTY_START_ROW);
AccessControlService.BlockingInterface protocol =
AccessControlService.newBlockingStub(service);
ProtobufUtil.grant(protocol, user, table, family, qualifier, actions);
}
return null;
}
});
}
}
----
@ -931,7 +942,9 @@ public static void revokeFromTable(final HBaseTestingUtility util, final String
SecureTestUtil.updateACLs(util, new Callable<Void>() {
@Override
public Void call() throws Exception {
HTable acl = new HTable(util.getConfiguration(), AccessControlLists.ACL_TABLE_NAME);
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
Table acl = connection.getTable(AccessControlLists.ACL_TABLE_NAME);
try {
BlockingRpcChannel service = acl.coprocessorService(HConstants.EMPTY_START_ROW);
AccessControlService.BlockingInterface protocol =
@ -1215,9 +1228,11 @@ The correct way to apply cell level labels is to do so in the application code w
====
[source,java]
----
static HTable createTableAndWriteDataWithLabels(TableName tableName, String... labelExps)
static Table createTableAndWriteDataWithLabels(TableName tableName, String... labelExps)
throws Exception {
HTable table = null;
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
Table table = null;
try {
table = TEST_UTIL.createTable(tableName, fam);
int i = 1;


@ -124,8 +124,9 @@ For example, if you wanted to trace all of your get operations, you change this:
[source,java]
----
HTable table = new HTable(conf, "t1");
Configuration config = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(config);
Table table = connection.getTable(TableName.valueOf("t1"));
Get get = new Get(Bytes.toBytes("r1"));
Result res = table.get(get);
----
@ -137,7 +138,7 @@ into:
TraceScope ts = Trace.startSpan("Gets", Sampler.ALWAYS);
try {
HTable table = new HTable(conf, "t1");
Table table = connection.getTable(TableName.valueOf("t1"));
Get get = new Get(Bytes.toBytes("r1"));
Result res = table.get(get);
} finally {


@ -627,6 +627,7 @@ This issue is caused by bugs in the MIT Kerberos replay_cache component, link:ht
These bugs caused the old version of krb5-server to erroneously block subsequent requests sent from a Principal.
This caused krb5-server to block the connections sent from one Client (one HTable instance with multi-threading connection instances for each RegionServer). Messages such as `Request is a replay (34)` are logged in the client log. You can ignore the messages, because HTable will retry 5 * 10 (50) times for each failed connection by default.
HTable will throw IOException if any connection to the RegionServer fails after the retries, so that the user client code for the HTable instance can handle it further.
NOTE: `HTable` is deprecated in HBase 1.0, in favor of `Table`.
Alternatively, update krb5-server to a version which solves these issues, such as krb5-server-1.10.3.
See JIRA link:https://issues.apache.org/jira/browse/HBASE-10379[HBASE-10379] for more details.


@ -42,7 +42,7 @@ This example will add unit tests to the following example class:
public class MyHBaseDAO {
public static void insertRecord(HTableInterface table, HBaseTestObj obj)
public static void insertRecord(Table table, HBaseTestObj obj)
throws Exception {
Put put = createPut(obj);
table.put(put);
@ -129,17 +129,19 @@ Next, add a `@RunWith` annotation to your test class, to direct it to use Mockit
@RunWith(MockitoJUnitRunner.class)
public class TestMyHBaseDAO{
@Mock
private HTableInterface table;
@Mock
private HTablePool hTablePool;
@Mock
private Connection connection;
@Mock
private Table table;
@Captor
private ArgumentCaptor putCaptor;
@Test
public void testInsertRecord() throws Exception {
//return mock table when getTable is called
when(hTablePool.getTable("tablename")).thenReturn(table);
when(connection.getTable(TableName.valueOf("tablename"))).thenReturn(table);
//create test object and make a call to the DAO that needs testing
HBaseTestObj obj = new HBaseTestObj();
obj.setRowKey("ROWKEY-1");
@ -162,7 +164,7 @@ This code populates `HBaseTestObj` with ``ROWKEY-1'', ``DATA-1'', ``DATA-2'' as
It then inserts the record into the mocked table.
The Put that the DAO would have inserted is captured, and values are tested to verify that they are what you expected them to be.
The key here is to manage htable pool and htable instance creation outside the DAO.
The key here is to manage Connection and Table instance creation outside the DAO.
This allows you to mock them cleanly and test Puts as shown above.
Similarly, you can now expand into other operations such as Get, Scan, or Delete.