HBASE-26899 Run spotless:apply

This commit is contained in:
Duo Zhang 2022-05-01 22:52:40 +08:00
parent 8ac84a7f1d
commit 1aea663c6d
4584 changed files with 119787 additions and 143318 deletions


@ -32,7 +32,7 @@ These release notes cover new developer and user-facing incompatibilities, impor
See the document http://hbase.apache.org/book.html#upgrade2.2 about how to upgrade from 2.0 or 2.1 to 2.2+.
HBase 2.2+ uses a new Procedure form for assigning/unassigning/moving Regions. It does not process HBase 2.1 and 2.0's Unassign/Assign Procedure types. Upgrade requires that we first drain the Master Procedure Store of old-style Procedures before starting the new 2.2 Master. So you need to make sure that, before you kill the old version (2.0 or 2.1) Master, there is no region in transition. And once the new version (2.2+) Master is up, you can do a rolling upgrade of RegionServers one by one.
There is an even safer way if you are running a 2.1.1+ or 2.0.3+ cluster. It needs four steps to upgrade the Master.
@ -421,15 +421,15 @@ Previously the recovered.edits directory was under the root directory. This JIRA
When the oldwals (and hfile) cleaner cleans stale WALs (and HFiles), it periodically checks and waits for the clean results from the filesystem; the total wait time is bounded by a configurable maximum.
The check-interval configurations are hbase.oldwals.cleaner.thread.check.interval.msec (default 500 ms) and hbase.regionserver.hfilecleaner.thread.check.interval.msec (default 1000 ms).
The maximum-wait configurations are hbase.oldwals.cleaner.thread.timeout.msec and hbase.regionserver.hfilecleaner.thread.timeout.msec; both default to 60 seconds.
All of them support dynamic configuration.
For example, in the oldwals cleaning scenario, one may consider tuning hbase.oldwals.cleaner.thread.timeout.msec and hbase.oldwals.cleaner.thread.check.interval.msec:
1. If deleting an oldwal never completes (strange but possible), the delete-file task waits for at most 60 seconds. Here 60 seconds might be too long; conversely, it can be raised above 60 seconds for use cases where file deletes are slow.
2. The check-and-wait period for a file delete defaults to 500 milliseconds. One might shorten it to check more frequently, or lengthen it to avoid checking too often (a longer interval can help when using high-latency storage). See the sketch below.
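A minimal sketch of tuning these cleaner settings, assuming the property names quoted above; the values are illustrative rather than recommendations, and in practice they would live in hbase-site.xml (they also support dynamic reconfiguration):
{code}
Configuration conf = HBaseConfiguration.create();
// Check the filesystem for delete results every 500 ms (the default).
conf.setLong("hbase.oldwals.cleaner.thread.check.interval.msec", 500L);
// Stop waiting on a single oldwal delete after 60 s (the default);
// raise this for slow/high-latency storage, lower it to give up faster.
conf.setLong("hbase.oldwals.cleaner.thread.timeout.msec", 60000L);
// Corresponding HFile cleaner settings.
conf.setLong("hbase.regionserver.hfilecleaner.thread.check.interval.msec", 1000L);
conf.setLong("hbase.regionserver.hfilecleaner.thread.timeout.msec", 60000L);
{code}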
@ -461,12 +461,12 @@ Solution: After this jira, the compaction event tracker will be writed to HFile.
* [HBASE-21820](https://issues.apache.org/jira/browse/HBASE-21820) | *Major* | **Implement CLUSTER quota scope**
HBase contains two quota scopes: MACHINE and CLUSTER. Before this patch, set-quota operations did not expose a scope option in the client API and used MACHINE as the default; the CLUSTER scope could not be set or used.
Shell commands are as follows:
set\_quota, TYPE =\> THROTTLE, TABLE =\> 't1', LIMIT =\> '10req/sec'
This issue implements CLUSTER scope in a simple way: For user, namespace, user over namespace quota, use [ClusterLimit / RSNum] as machine limit. For table and user over table quota, use [ClusterLimit / TotalTableRegionNum \* MachineTableRegionNum] as machine limit.
After this patch, users can set CLUSTER-scope quotas, but MACHINE remains the default if the scope is omitted.
Shell commands are as follows:
set\_quota, TYPE =\> THROTTLE, TABLE =\> 't1', LIMIT =\> '10req/sec'
set\_quota, TYPE =\> THROTTLE, TABLE =\> 't1', LIMIT =\> '10req/sec', SCOPE =\> MACHINE
@ -491,11 +491,11 @@ Remove bloom filter type ROWPREFIX\_DELIMITED. May add it back when find a bette
* [HBASE-21783](https://issues.apache.org/jira/browse/HBASE-21783) | *Major* | **Support exceed user/table/ns throttle quota if region server has available quota**
Support enabling or disabling the exceed throttle quota. Exceed throttle quota means a user can over-consume the user/namespace/table quota if the region server has additional available quota because other users are not consuming at the same time.
Use the following shell commands to enable/disable exceed throttle quota: enable\_exceed\_throttle\_quota
disable\_exceed\_throttle\_quota
There are two requirements when enabling the exceed throttle quota:
1. At least one read and one write region server throttle quota must be set;
2. All region server throttle quotas must use the seconds time unit, because once previous requests exceed their quota and consume the region server quota, quotas in other time units may take a long time to refill, which can affect later requests.
@ -621,7 +621,7 @@ Add a clearRegionLocationCache method in Connection to clear the region location
* [HBASE-21713](https://issues.apache.org/jira/browse/HBASE-21713) | *Major* | **Support set region server throttle quota**
Support setting a region server RPC throttle quota, which represents the read/write capacity of region servers and throttles when a region server's total requests exceed the limit.
Use the following shell command to set RS quota:
set\_quota TYPE =\> THROTTLE, REGIONSERVER =\> 'all', THROTTLE\_TYPE =\> WRITE, LIMIT =\> '20000req/sec'
@ -650,7 +650,7 @@ Adds shell support for the following:
* [HBASE-21734](https://issues.apache.org/jira/browse/HBASE-21734) | *Major* | **Some optimization in FilterListWithOR**
After HBASE-21620, FilterListWithOR became a bit slow because we need to merge each sub-filter's return code (RC); before HBASE-21620 we skipped much of the RC merging, but that logic was wrong. So here we choose another way to optimize performance: removing KeyValueUtil#toNewKeyCell.
Anoop Sam John noted that KeyValueUtil#toNewKeyCell used to save some GC: if we copy the key part of a cell into a single byte[], the block the cell refers to is no longer referenced by the filter list, so the upper layer can GC the data block quickly. But after HBASE-21620 we update prevCellList for every encountered cell, so the lifecycle of cells in the FilterList's prevCellList is much shorter; just using the cell reference saves CPU.
We also removed all the Arrays.stream usage in FilterList, because it was quite time-consuming in our tests.
@ -702,15 +702,15 @@ Python3 support was added to dev-support/submit-patch.py. To install newly requi
In HBASE-21657, I simplified the path of estimatedSerialiedSize() & estimatedSerialiedSizeOfCell() by moving the general getSerializedSize()
and heapSize() from ExtendedCell to the Cell interface. The patch also included some other improvements:
1. For 99% of cases our cells have no tags, so let HFileScannerImpl just return a NoTagsByteBufferKeyValue when there are no tags. This saves a lot of CPU time when sending no-tags cells over RPC, because we can just return the length instead of computing the serialized size by calculating the offset/length of each field (row/cf/cq, ...).
2. Move each subclass's getSerializedSize implementation from ExtendedCell to its own class, which means we no longer need to call ExtendedCell's getSerializedSize() first and then forward to the subclass's getSerializedSize(withTags).
3. Provide an estimated result ArrayList size to avoid frequent list resizing during a big scan; we now estimate the array size as min(scan.rows, 512). This also helps a lot.
We gain almost ~40% throughput improvement in the 100% scan case for branch-2 (cacheHitRatio~100%)[1], which is a good thing. However, it is an incompatible change in some cases: if an upstream user implemented their own Cells (rare, but possible), their code will no longer compile.
@ -732,7 +732,7 @@ Before this issue, thrift1 server and thrift2 server are totally different serve
* [HBASE-21661](https://issues.apache.org/jira/browse/HBASE-21661) | *Major* | **Provide Thrift2 implementation of Table/Admin**
ThriftAdmin/ThriftTable are implemented based on Thrift2. With ThriftAdmin/ThriftTable, people can use the thrift2 protocol just like HTable/HBaseAdmin.
Example of using ThriftConnection
Configuration conf = HBaseConfiguration.create();
conf.set(ClusterConnection.HBASE\_CLIENT\_CONNECTION\_IMPL,ThriftConnection.class.getName());
@ -766,7 +766,7 @@ Add a new configuration "hbase.skip.load.duplicate.table.coprocessor". The defau
* [HBASE-21650](https://issues.apache.org/jira/browse/HBASE-21650) | *Major* | **Add DDL operation and some other miscellaneous to thrift2**
Added DDL operations and some other structure definitions to thrift2. Methods added:
create/modify/addColumnFamily/deleteColumnFamily/modifyColumnFamily/enable/disable/truncate/delete table
create/modify/delete namespace
get(list)TableDescriptor(s)/get(list)NamespaceDescriptor(s)
@ -845,8 +845,8 @@ hbase(main):003:0> rit
hbase(main):004:0> unassign '56f0c38c81ae453d19906ce156a2d6a1'
0 row(s) in 0.0540 seconds
hbase(main):005:0> rit
IntegrationTestBigLinkedList,L\xCC\xCC\xCC\xCC\xCC\xCC\xCB,1539117183224.56f0c38c81ae453d19906ce156a2d6a1. state=PENDING_CLOSE, ts=Tue Oct 09 20:33:34 UTC 2018 (0s ago), server=null
1 row(s) in 0.0170 seconds
```
@ -1329,7 +1329,7 @@ This represents an incompatible change for users who relied on this implementati
This enhances the AccessControlClient APIs to retrieve permissions based on namespace, table name, family and qualifier for a specific user. AccessControlClient can also validate whether a user is allowed to perform specified operations on a particular table.
The following APIs have been added (a hedged usage sketch follows this list):
1) getUserPermissions(Connection connection, String tableRegex, byte[] columnFamily, byte[] columnQualifier, String userName)
The scope of retrieving permissions is the same as the existing behavior.
2) hasPermission(Connection connection, String tableName, byte[] columnFamily, byte[] columnQualifier, String userName, Permission.Action... actions)
Scope of validating user privileges:
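A hedged sketch of calling the two new APIs with the signatures listed above; the table, column, and user names are made-up placeholders, and exception handling is omitted:
{code}
Configuration conf = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(conf)) {
  // 1) List a user's permissions for tables matching a regex, narrowed to a family/qualifier.
  List<UserPermission> perms = AccessControlClient.getUserPermissions(
      connection, "t1", Bytes.toBytes("f1"), Bytes.toBytes("q1"), "bob");
  // 2) Check whether that user may READ the same column.
  boolean canRead = AccessControlClient.hasPermission(
      connection, "t1", Bytes.toBytes("f1"), Bytes.toBytes("q1"), "bob",
      Permission.Action.READ);
}
{code}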
@ -2095,11 +2095,11 @@ ColumnValueFilter provides a way to fetch matched cells only by providing specif
A region is flushed if its memory component exceeds the region flush threshold.
A flush policy decides which stores to flush by comparing the size of the store to a column-family-flush threshold.
If the overall size of all memstores in the machine exceeds the bounds defined by the administrator (denoted global pressure) a region is selected and flushed.
HBASE-18294 changes flush decisions to be based on heap-occupancy and not data (key-value) size, consistently across levels. This rolls back some of the changes by HBASE-16747. Specifically,
(1) RSs, Regions and stores track their overall on-heap and off-heap occupancy,
(2) A region is flushed when its on-heap+off-heap size exceeds the region flush threshold specified in hbase.hregion.memstore.flush.size,
(3) The store to be flushed is chosen based on its on-heap+off-heap size
(4) At the RS level, a flush is triggered when the overall on-heap exceeds the on-heap limit, or when the overall off-heap size exceeds the off-heap limit (low/high water marks).
Note that when the region flush size is set to XX MB, a region flush may be triggered even before keys and values of total size XX have been written, because the total heap occupancy of the region, which includes additional metadata, has exceeded the threshold.
@ -2615,13 +2615,13 @@ And for server side, the default hbase.client.serverside.retries.multiplier was
* [HBASE-18090](https://issues.apache.org/jira/browse/HBASE-18090) | *Major* | **Improve TableSnapshotInputFormat to allow more multiple mappers per region**
In this task, we make it possible to run multiple mappers per region in the table snapshot. The following code is the primary table snapshot mapper initialization:
TableMapReduceUtil.initTableSnapshotMapperJob(
snapshotName, // The name of the snapshot (of a table) to read from
scan, // Scan instance to control CF and attribute selection
mapper, // mapper
outputKeyClass, // mapper output key
outputValueClass, // mapper output value
job, // The current job to adjust
true, // upload HBase jars and jars for any of the configured job classes via the distributed cache (tmpjars)
@ -2634,7 +2634,7 @@ TableMapReduceUtil.initTableSnapshotMapperJob(
snapshotName, // The name of the snapshot (of a table) to read from
scan, // Scan instance to control CF and attribute selection
mapper, // mapper
outputKeyClass, // mapper output key
outputValueClass, // mapper output value
job, // The current job to adjust
true, // upload HBase jars and jars for any of the configured job classes via the distributed cache (tmpjars)
@ -2672,7 +2672,7 @@ List\<Tag\> getTags()
Optional\<Tag\> getTag(byte type)
byte[] cloneTags()
The above APIs help to read tags from the Cell.
CellUtil#createCell(Cell cell, List\<Tag\> tags)
CellUtil#createCell(Cell cell, byte[] tags)
@ -2808,7 +2808,7 @@ Change the import order rule that now we should put the shaded import at bottom.
* [HBASE-19187](https://issues.apache.org/jira/browse/HBASE-19187) | *Minor* | **Remove option to create on heap bucket cache**
Removing the on heap Bucket cache feature.
The config "hbase.bucketcache.ioengine" no longer support the 'heap' value.
The config "hbase.bucketcache.ioengine" no longer support the 'heap' value.
Its supported values now are 'offheap', 'file:\<path\>', 'files:\<path\>' and 'mmap:\<path\>'
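A minimal sketch of selecting one of the remaining ioengine values; the values are those listed above, while the file path is a made-up example (these settings normally go in hbase-site.xml):
{code}
Configuration conf = HBaseConfiguration.create();
// 'heap' is no longer accepted; pick one of the values listed above.
conf.set("hbase.bucketcache.ioengine", "offheap");
// Alternatively, a file-backed cache (the path below is a placeholder):
// conf.set("hbase.bucketcache.ioengine", "file:/mnt/ssd/bucketcache.data");
{code}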
@ -2964,12 +2964,12 @@ Removes blanket bypass mechanism (Observer#bypass). Instead, a curated subset of
The below methods have been marked deprecated in hbase2. We would have liked to have removed them because they use IA.Private parameters but they are in use by CoreCoprocessors or are critical to downstreamers and we have no alternatives to provide currently.
@Deprecated public boolean prePrepareTimeStampForDeleteVersion(final Mutation mutation, final Cell kv, final byte[] byteNow, final Get get) throws IOException {
@Deprecated public boolean preWALRestore(final RegionInfo info, final WALKey logKey, final WALEdit logEdit) throws IOException {
@Deprecated public void postWALRestore(final RegionInfo info, final WALKey logKey, final WALEdit logEdit) throws IOException {
@Deprecated public DeleteTracker postInstantiateDeleteTracker(DeleteTracker result) throws IOException
Metrics are updated now even if the Coprocessor does a bypass; e.g. The put count is updated even if a Coprocessor bypasses the core put operation (We do it this way so no need for Coprocessors to have access to our core metrics system).
@ -3000,7 +3000,7 @@ Made defaults for Server#isStopping and Server#getFileSystem. Should have done t
* [HBASE-19047](https://issues.apache.org/jira/browse/HBASE-19047) | *Critical* | **CP exposed Scanner types should not extend Shipper**
RegionObserver#preScannerOpen signature changed
RegionScanner preScannerOpen( ObserverContext\<RegionCoprocessorEnvironment\> c, Scan scan, RegionScanner s) -\> void preScannerOpen( ObserverContext\<RegionCoprocessorEnvironment\> c, Scan scan)
The pre hook can no longer return a RegionScanner instance.
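A hedged sketch of a RegionObserver using the new signature; coprocessor registration boilerplate (e.g. RegionCoprocessor#getRegionObserver) is omitted, and the Scan tweak is only an example:
{code}
public class ScanTweakingObserver implements RegionObserver {
  @Override
  public void preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan)
      throws IOException {
    // The hook may still adjust the Scan, but it can no longer supply its own RegionScanner.
    scan.setCacheBlocks(false);
  }
}
{code}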
@ -3084,12 +3084,12 @@ Add missing deprecation tag for long getRpcTimeout(TimeUnit unit) in AsyncTableB
* [HBASE-18410](https://issues.apache.org/jira/browse/HBASE-18410) | *Major* | **FilterList Improvement.**
In this task, we fixed all known bugs in FilterList and refactored the code while preserving interface compatibility.
The primary bug fixes are:
1. For a sub-filter in a FilterList with MUST\_PASS\_ONE, if the sub-filter's previous filterKeyValue() returned NEXT\_COL, we cannot be sure that the next cell will be the first cell in the next column, because FilterList chooses the minimal forward step among its sub-filters and may return a SKIP. So we add an extra check to ensure that the next cell matches the previous return code of each sub-filter.
2. The previous logic for transforming cells in FilterList was incorrect; we should set the previous transform result (rather than the given cell in question) as the initial value of the transformed cell before calling filterKeyValue() of FilterList.
3. Handle the ReturnCodes which the previous code did not handle.
For the refactor, we split FilterList into two separate sub-classes: FilterListWithOR and FilterListWithAND. FilterListWithOR has been optimized to choose the next minimal step to seek to rather than SKIP cells one by one, and FilterListWithAND has been optimized to choose the next maximal key to seek to among the sub-filters in the filter list. All in all, the FilterList code is now cleaner and easier to follow.
@ -3901,7 +3901,7 @@ Changes ObserverContext from a class to an interface and hides away constructor,
* [HBASE-18649](https://issues.apache.org/jira/browse/HBASE-18649) | *Major* | **Deprecate KV Usage in MR to move to Cells in 3.0**
All mapper and reducer output types are now MapReduceCell; the KeyValue type is no longer used. However, in branch-2, for compatibility, the older interfaces/classes that work with KeyValue remain in the code base, marked as deprecated.
The following interfaces/classes have been deprecated in branch-2
Import#KeyValueWritableComparablePartitioner
Import#KeyValueWritableComparator
@ -3936,8 +3936,8 @@ The changes of IA.Public/IA.LimitedPrivate classes are shown below:
HTableDescriptor class
\* boolean hasRegionMemstoreReplication()
+ boolean hasRegionMemStoreReplication()
\* HTableDescriptor setRegionMemstoreReplication(boolean)
+ HTableDescriptor setRegionMemStoreReplication(boolean)
RegionLoadStats class
\* int getMemstoreLoad()
@ -4013,8 +4013,8 @@ HBaseTestingUtility class
- void modifyTableSync(Admin admin, HTableDescriptor desc)
- HRegion createLocalHRegion(HTableDescriptor desc, byte [] startKey, byte [] endKey)
- HRegion createLocalHRegion(HRegionInfo info, HTableDescriptor desc)
- HRegion createLocalHRegion(HRegionInfo info, TableDescriptor desc)
+ HRegion createLocalHRegion(RegionInfo info, TableDescriptor desc)
- HRegion createLocalHRegion(HRegionInfo info, HTableDescriptor desc, WAL wal)
- HRegion createLocalHRegion(HRegionInfo info, TableDescriptor desc, WAL wal)
+ HRegion createLocalHRegion(RegionInfo info, TableDescriptor desc, WAL wal)
@ -4121,7 +4121,7 @@ We used to pass the RegionServerServices (RSS) which gave Coprocesosrs (CP) all
Removed the method getRegionServerServices from the CP-exposed RegionCoprocessorEnvironment and RegionServerCoprocessorEnvironment and replaced it with getCoprocessorRegionServerServices. This returns a new interface, CoprocessorRegionServerServices, which is only a subset of RegionServerServices. With that, the methods below are no longer exposed to CPs:
WAL getWAL(HRegionInfo regionInfo)
List\<WAL\> getWALs()
FlushRequester getFlushRequester()
RegionServerAccounting getRegionServerAccounting()
RegionServerRpcQuotaManager getRegionServerRpcQuotaManager()
@ -4161,8 +4161,8 @@ void addToOnlineRegions(Region region)
boolean removeFromOnlineRegions(final Region r, ServerName destination)
Also 3 methods name have been changed
List\<Region\> getOnlineRegions(TableName tableName) -\> List\<Region\> getRegions(TableName tableName)
List\<Region\> getOnlineRegions() -\> List\<Region\> getRegions()
Region getFromOnlineRegions(final String encodedRegionName) -\> Region getRegion(final String encodedRegionName)
@ -4225,7 +4225,7 @@ void closeReader(boolean evictOnClose) throws IOException;
void markCompactedAway();
void deleteReader() throws IOException;
Notice that these methods are still available in HStoreFile.
And the return value of getFirstKey and getLastKey are changed from Cell to Optional\<Cell\> to better indicate that they may not be available.
@ -4528,7 +4528,7 @@ Replaces hbase-shaded-server-\<version\>.jar with hbase-shaded-mapreduce-\<versi
* [HBASE-15607](https://issues.apache.org/jira/browse/HBASE-15607) | *Blocker* | **Remove PB references from Admin for 2.0**
All the references to Protos in Admin.java have been removed and replaced with respective POJO classes.
The references to Protos that were removed are
AdminProtos.GetRegionInfoResponse,
HBaseProtos.SnapshotDescription, HBaseProtos.SnapshotDescription.Type,
@ -4656,7 +4656,7 @@ This patch removed the storefile\_index\_size\_MB in protobuf. It will cause the
* [HBASE-18519](https://issues.apache.org/jira/browse/HBASE-18519) | *Major* | **Use builder pattern to create cell**
Introduce the CellBuilder helper.
1) Use CellBuilderFactory to get a CellBuilder for creating a cell with row, column, qualifier, type, and value (see the sketch after this list).
2) For internal use, the ExtendedCellBuilder, which is created by ExtendedCellBuilderFactory, is able to build cell with extra fields - sequence id and tags -
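A minimal sketch of point 1 above with the public builder; the method names follow the current CellBuilder API, and the row/column/value bytes are made-up placeholders:
{code}
Cell cell = CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY)
    .setRow(Bytes.toBytes("row1"))
    .setFamily(Bytes.toBytes("f"))
    .setQualifier(Bytes.toBytes("q"))
    .setType(Cell.Type.Put)
    .setValue(Bytes.toBytes("v"))
    .build();
{code}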
@ -4840,7 +4840,7 @@ This patch exposes configuration for Bucketcache. These configs are very similar
* [HBASE-18271](https://issues.apache.org/jira/browse/HBASE-18271) | *Blocker* | **Shade netty**
Depend on hbase-thirdparty for our netty instead of relying directly on netty-all. Netty is relocated in hbase-thirdparty from io.netty to org.apache.hadoop.hbase.shaded.io.netty. One kink is that netty bundles an .so whose files are also relocated; so that netty can find the .so content, a system property telling netty about the shading needs to be specified on the command line.
The .so trick is from
https://stackoverflow.com/questions/33825743/rename-files-inside-a-jar-using-some-maven-plugin
@ -4877,7 +4877,7 @@ Note that, the constructor way to new a ClusterStatus will be no longer support
* [HBASE-18551](https://issues.apache.org/jira/browse/HBASE-18551) | *Major* | **[AMv2] UnassignProcedure and crashed regionservers**
Unassign will not proceed if it is unable to talk to the remote server. Now it will expire the server it is unable to communicate with and then wait until it is signaled by ServerCrashProcedure that the server's logs have been split. Only then will it judge the unassign successful.
We do this because a subsequent assign lacking the crashed server context might open a region w/o first splitting logs.
@ -4948,7 +4948,7 @@ The methods which change to use TableDescriptor/ColumnFamilyDescriptor are shown
+ postCloneSnapshot(ObserverContext\<MasterCoprocessorEnvironment\>,SnapshotDescription,TableDescriptor)
+ preRestoreSnapshot(ObserverContext\<MasterCoprocessorEnvironment\>,SnapshotDescription,TableDescriptor)
+ postRestoreSnapshot(ObserverContext\<MasterCoprocessorEnvironment\>,SnapshotDescription,TableDescriptor)
+ preGetTableDescriptors(ObserverContext\<MasterCoprocessorEnvironment\>,List\<TableName\>, List\<TableDescriptor\>,String)
+ postGetTableDescriptors(ObserverContext\<MasterCoprocessorEnvironment\>,List\<TableName\>, List\<TableDescriptor\>,String)
+ preGetTableNames(ObserverContext\<MasterCoprocessorEnvironment\>,List\<TableDescriptor\>, String)
+ postGetTableNames(ObserverContext\<MasterCoprocessorEnvironment\>,List\<TableDescriptor\>, String)
@ -5063,11 +5063,11 @@ Committed to master and branch-2. Thanks!
In order to use this feature, a user must
1. Register their tables when configuring their job
2. Create a composite key of the tablename and original rowkey to send as the mapper output key.
To register their tables (and configure their job for incremental load into multiple tables), a user must call the static MultiHFileOutputFormat.configureIncrementalLoad function to register the HBase tables that will be ingested into.
To create the composite key, a helper function MultiHFileOutputFormat2.createCompositeKey should be called with the destination tablename and rowkey as arguments, and the result should be output as the mapper key.
Before this JIRA, for HFileOutputFormat2 a configuration for the storage policy was set per Column Family. This was set manually by the user. In this JIRA, this is unchanged when using HFileOutputFormat2. However, when specifically using MultiHFileOutputFormat2, the user now has to manually set the prefix by creating a composite of the table name and the column family. The user can create the new composite value by calling MultiHFileOutputFormat2.createCompositeKey with the tablename and column family as arguments.
@ -5080,9 +5080,9 @@ The configuration parameter "hbase.mapreduce.hfileoutputformat.table.name" is no
* [HBASE-18229](https://issues.apache.org/jira/browse/HBASE-18229) | *Critical* | **create new Async Split API to embrace AM v2**
A new splitRegionAsync() API is added to the client. The existing splitRegion() and split() APIs call the new API, so client code does not have to change.
HBaseAdmin.splitXXX() logic moves to the master; client splitXXX() APIs now go to the master directly instead of going to a RegionServer first.
Also added splitSync() API
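A hedged sketch of driving the async split from the client; the table name and split key are placeholders, and exact Admin overloads may vary by version:
{code}
try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
     Admin admin = connection.getAdmin()) {
  byte[] regionName = admin.getRegions(TableName.valueOf("t1")).get(0).getRegionName();
  Future<Void> f = admin.splitRegionAsync(regionName, Bytes.toBytes("row-5000"));
  f.get(); // block here only if a synchronous split is desired
}
{code}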
@ -5236,7 +5236,7 @@ Add unit tests for truncate\_preserve
* [HBASE-18240](https://issues.apache.org/jira/browse/HBASE-18240) | *Major* | **Add hbase-thirdparty, a project with hbase utility including an hbase-shaded-thirdparty module with guava, netty, etc.**
Adds a new project, hbase-thirdparty, at https://git-wip-us.apache.org/repos/asf/hbase-thirdparty used by core hbase. GroupID org.apache.hbase.thirdparty. Version 1.0.0.
This project packages relocated third-party libraries used by Apache HBase such as protobuf, guava, and netty among others. HBase core depends on it.
@ -5275,9 +5275,9 @@ After HBASE-17110 the bytable strategy for SimpleLoadBalancer will also take ser
Adds clear\_compaction\_queues to the hbase shell.
{code}
Clear compaction queues on a regionserver.
The queue\_name contains short and long.
short is shortCompactions's queue,long is longCompactions's queue.
Examples:
hbase\> clear\_compaction\_queues 'host187.example.com,60020'
hbase\> clear\_compaction\_queues 'host187.example.com,60020','long'
@ -5367,8 +5367,8 @@ Adds a sort of procedures before submission so system tables are queued first (w
* [HBASE-18008](https://issues.apache.org/jira/browse/HBASE-18008) | *Major* | **Any HColumnDescriptor we give out should be immutable**
1) The HColumnDescriptor obtained from Admin, AsyncAdmin, and Table is immutable.
2) HColumnDescriptor has been marked as "Deprecated" and users should substitute ColumnFamilyDescriptor for HColumnDescriptor (a builder sketch follows this list).
3) ColumnFamilyDescriptor is constructed through ColumnFamilyDescriptorBuilder and it contains all of the read-only methods from HColumnDescriptor
4) The value to which IS\_MOB/MOB\_THRESHOLD is mapped is stored as String rather than Boolean/Long. MOB is a new feature in 2.0, so this change should be acceptable.
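A minimal sketch of building the immutable descriptors that replace HColumnDescriptor/HTableDescriptor; the table and family names are placeholders and the Admin instance is assumed to be in scope:
{code}
ColumnFamilyDescriptor family = ColumnFamilyDescriptorBuilder
    .newBuilder(Bytes.toBytes("f1"))
    .setMaxVersions(3)
    .build();
TableDescriptor table = TableDescriptorBuilder
    .newBuilder(TableName.valueOf("t1"))
    .setColumnFamily(family)
    .build();
admin.createTable(table); // 'admin' is an org.apache.hadoop.hbase.client.Admin
{code}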
@ -5551,7 +5551,7 @@ The default behavior for abort() method of StateMachineProcedure class is change
* [HBASE-16851](https://issues.apache.org/jira/browse/HBASE-16851) | *Major* | **User-facing documentation for the In-Memory Compaction feature**
Two blog posts on Apache HBase blog: user manual and programmer manual.
Ref. guide draft published: https://docs.google.com/document/d/1Xi1jh\_30NKnjE3wSR-XF5JQixtyT6H\_CdFTaVi78LKw/edit
@ -5564,18 +5564,18 @@ Ref. guide draft published: https://docs.google.com/document/d/1Xi1jh\_30NKnjE3w
CompactingMemStore achieves these gains through smart use of RAM. The algorithm periodically re-organizes the in-memory data in efficient data structures and reduces redundancies. The HBase server's memory footprint therefore periodically expands and contracts. The outcome is a longer lifetime of data in memory, less I/O, and overall faster performance. More details about the algorithm and its use appear in the Apache HBase Blog: https://blogs.apache.org/hbase/
How To Use:
The in-memory compaction level can be configured both globally and per column family. The supported levels are none (DefaultMemStore), basic, and eager.
By default, all tables apply basic in-memory compaction. This global configuration can be overridden in hbase-site.xml, as follows:
\<property\>
\<name\>hbase.hregion.compacting.memstore.type\</name\>
\<value\>\<none\|basic\|eager\>\</value\>
\</property\>
The level can also be configured in the HBase shell per column family, as follows:
create \<tablename\>,
{NAME =\> \<cfname\>, IN\_MEMORY\_COMPACTION =\> \<NONE\|BASIC\|EAGER\>}
@ -5656,7 +5656,7 @@ MVCCPreAssign is added by HBASE-16698, but pre-assign mvcc is only used in put/d
* [HBASE-16466](https://issues.apache.org/jira/browse/HBASE-16466) | *Major* | **HBase snapshots support in VerifyReplication tool to reduce load on live HBase cluster with large tables**
Support for snapshots in the VerifyReplication tool, i.e. verifyrep can compare a source table snapshot against a peer table snapshot, which reduces load on RSs by reading data directly from HDFS using snapshot scanners.
Instead of comparing against live tables, whose state changes due to writes and compactions, it is better to compare HBase snapshots, which are immutable.
@ -5827,7 +5827,7 @@ Now small scan and limited scan could also return partial results.
* [HBASE-16014](https://issues.apache.org/jira/browse/HBASE-16014) | *Major* | **Get and Put constructor argument lists are divergent**
Add 2 constructors for the Get API (see the sketch below):
1. Get(byte[], int, int)
2. Get(ByteBuffer)
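A tiny sketch of the two new constructors; the buffer contents and offsets are made-up:
{code}
byte[] buf = Bytes.toBytes("prefix-row-0001-suffix");
Get g1 = new Get(buf, 7, 8);                                   // row taken from a slice of a larger array
Get g2 = new Get(ByteBuffer.wrap(Bytes.toBytes("row-0001")));  // row supplied as a ByteBuffer
{code}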
@ -5986,7 +5986,7 @@ Changes all tests to use the TestName JUnit Rule everywhere rather than hardcode
The HBase cleaner chore process cleans up old WAL files and archived HFiles. Cleaner operation can affect query performance when running heavy workloads, so disable the cleaner during peak hours. The cleaner has the following HBase shell commands:
- cleaner\_chore\_enabled: Queries whether the cleaner chore is enabled/disabled.
- cleaner\_chore\_run: Manually runs the cleaner to remove files.
- cleaner\_chore\_switch: enables or disables the cleaner and returns the previous state of the cleaner. For example, cleaner-switch true enables the cleaner.
@ -6049,8 +6049,8 @@ Now the scan.setSmall method is deprecated. Consider using scan.setLimit and sca
Mob compaction partition policy can be set by
hbase\> create 't1', {NAME =\> 'f1', IS\_MOB =\> true, MOB\_THRESHOLD =\> 1000000, MOB\_COMPACT\_PARTITION\_POLICY =\> 'weekly'}
or
hbase\> alter 't1', {NAME =\> 'f1', IS\_MOB =\> true, MOB\_THRESHOLD =\> 1000000, MOB\_COMPACT\_PARTITION\_POLICY =\> 'monthly'}
@ -6093,16 +6093,16 @@ Fix inability at finding static content post push of parent issue moving us to j
* [HBASE-9774](https://issues.apache.org/jira/browse/HBASE-9774) | *Major* | **HBase native metrics and metric collection for coprocessors**
This issue adds two new modules, hbase-metrics and hbase-metrics-api, which define and implement the "new" metric system used internally within HBase. These two modules (and some other code in the hbase-hadoop2-compat module) are referred to as the "HBase metrics framework", which is HBase-specific and independent of any other metrics library (including Hadoop metrics2 and Dropwizard metrics).
HBase Metrics API (hbase-metrics-api) contains the interface that HBase exposes internally and to third party code (including coprocessors). It is a thin
abstraction over the actual implementation for backwards compatibility guarantees. The metrics API in this hbase-metrics-api module is inspired by the Dropwizard metrics 3.1 API, however, the API is completely independent.
The hbase-metrics module contains the implementation of the "HBase Metrics API", including MetricRegistry, Counter, Histogram, etc. These are highly concurrent implementations of the Metric interfaces. Metrics in HBase are grouped into different sets (like WAL, RPC, RegionServer, etc). Each group of metrics should be tracked via a MetricRegistry specific to that group.
Historically, HBase has been using Hadoop's Metrics2 framework [3] for collecting and reporting metrics internally. However, due to the difficulty of dealing with the Metrics2 framework, HBase is moving away from Hadoop's metrics implementation to its custom implementation. The move will happen incrementally, and during that time both Hadoop Metrics2-based metrics and hbase-metrics module based classes will be in the source code. All new implementations for metrics SHOULD use the new API and framework.
This jira also introduces the metrics API to coprocessor implementations. Coprocessor writers can export custom metrics using the API and have them collected via metrics2 sinks, as well as exported via JMX in regionserver metrics.
More documentation available at: hbase-metrics-api/README.txt
@ -6166,7 +6166,7 @@ Move locking to be procedure (Pv2) rather than zookeeper based. All locking move
* [HBASE-17470](https://issues.apache.org/jira/browse/HBASE-17470) | *Major* | **Remove merge region code from region server**
In 1.x branches, Admin.mergeRegions calls the MASTER via the dispatchMergingRegions RPC; when executing dispatchMergingRegions, the MASTER calls the RS via MergeRegions to complete the merge on the RS side.
With HBASE-16119, the merge logic moves to the master side. This JIRA cleans up unused RPCs (dispatchMergingRegions and MergeRegions), removes dangerous tools such as Merge and HMerge, and deletes unused RegionServer-side merge region logic in the 2.0 release.
@ -6336,7 +6336,7 @@ Possible memstore compaction policies are:
Memory compaction policy can be set at the column family level at table creation time:
{code}
create \<tablename\>,
{NAME =\> \<cfname\>,
IN\_MEMORY\_COMPACTION =\> \<NONE\|BASIC\|EAGER\>}
{code}
or as a property at the global configuration level by setting the property in hbase-site.xml, with BASIC being the default value:
@ -6374,7 +6374,7 @@ Provides ability to restrict table coprocessors based on HDFS path whitelist. (P
* [HBASE-17221](https://issues.apache.org/jira/browse/HBASE-17221) | *Major* | **Abstract out an interface for RpcServer.Call**
Provide an interface RpcCall on the server side.
RpcServer.Call now is marked as @InterfaceAudience.Private, and implements the interface RpcCall,
@ -6682,7 +6682,7 @@ Add AsyncConnection, AsyncTable and AsyncTableRegionLocator. Now the AsyncTable
This issue fixes three bugs:
1. The rpcTimeout configuration did not work for a single rpc call in AP
2. The operationTimeout configuration did not work for multi-requests (batch, put) in AP
3. setRpcTimeout and setOperationTimeout in HTable did not work for AP and BufferedMutator.
@ -6712,7 +6712,7 @@ exist in a cleanly closed file.
If an EOF is detected due to parsing or other errors while there are still unparsed bytes before the end-of-file trailer, we now reset the WAL to the very beginning and attempt a clean read-through. Because we will retry these failures indefinitely, two additional changes are made to help with diagnostics:
\* On each retry attempt, a log message like the below will be emitted at the WARN level:
Processing end of WAL file '{}'. At position {}, which is too far away
from reported file length {}. Restarting WAL reading (see HBASE-15983
for details).
@ -7035,7 +7035,7 @@ Adds logging of region and server. Helpful debugging. Logging now looks like thi
* [HBASE-14743](https://issues.apache.org/jira/browse/HBASE-14743) | *Minor* | **Add metrics around HeapMemoryManager**
New memory metrics reveal what is happening in both the MemStores and the BlockCache in a RegionServer. Through these metrics, users/operators can know:
1). Current size of MemStores and BlockCache in bytes.
2). Occurrence for Memstore minor and major flush. (named unblocked flush and blocked flush respectively, shown in histogram)
3). Dynamic changes in size between MemStores and BlockCache. (with Increase/Decrease as prefix, shown in histogram). And a counter for no changes, named DoNothingCounter.
@ -7062,7 +7062,7 @@ When LocalHBaseCluster is started from the command line the Master would give up
* [HBASE-16052](https://issues.apache.org/jira/browse/HBASE-16052) | *Major* | **Improve HBaseFsck Scalability**
HBASE-16052 improves the performance and scalability of HBaseFsck, especially for large clusters with a small number of large tables.
Searching for lingering reference files is now a multi-threaded operation. Loading HDFS region directory information is now multi-threaded at the region-level instead of the table-level to maximize concurrency. A performance bug in HBaseFsck that resulted in redundant I/O and RPCs was fixed by introducing a FileStatusFilter that filters FileStatus objects directly.
@ -7078,7 +7078,7 @@ If zk based replication queue is used and useMulti is false, we will schedule a
* [HBASE-3727](https://issues.apache.org/jira/browse/HBASE-3727) | *Minor* | **MultiHFileOutputFormat**
MultiHFileOutputFormat supports output of HFiles from multiple tables. It will output directories and hfiles as follows:
--table1
--family1
--family2
@ -7102,7 +7102,7 @@ Prior to this change, the integration test clients (IntegrationTest\*) relied on
* [HBASE-13823](https://issues.apache.org/jira/browse/HBASE-13823) | *Major* | **Procedure V2: unnecessaery operations on AssignmentManager#recoverTableInDisablingState() and recoverTableInEnablingState()**
For clusters upgraded from 1.0.x or older releases, master startup will not continue an in-progress enable/disable table process. If an orphaned znode with ENABLING/DISABLING state exists in the cluster, run hbck or manually fix the issue.
For new clusters, or clusters upgraded from 1.1.x and newer releases, there is no issue to worry about.
@ -7111,9 +7111,9 @@ For new cluster or cluster upgraded from 1.1.x and newer release, there is no is
* [HBASE-16095](https://issues.apache.org/jira/browse/HBASE-16095) | *Major* | **Add priority to TableDescriptor and priority region open thread pool**
Adds a PRIORITY property to the HTableDescriptor. PRIORITY should be in the same range as the RpcScheduler defines it (HConstants.XXX\_QOS).
Table priorities are only used for region opening for now. There can be other uses later (like RpcScheduling).
Regions of high priority tables (priority \>= than HIGH\_QOS) are opened from a different thread pool than the regular region open thread pool. However, table priorities are not used as a global order for region assigning or opening.
@ -7129,7 +7129,7 @@ When a replication endpoint is sent a shutdown request by the replication source
* [HBASE-16087](https://issues.apache.org/jira/browse/HBASE-16087) | *Major* | **Replication shouldn't start on a master if if only hosts system tables**
Masters will no longer start any replication threads if they are hosting only system tables.
In order to change this, add something to the config for tables on master that does not start with "hbase:" (replicating system tables is currently unsupported and can open up security holes, so do this at your own peril).
@ -7138,7 +7138,7 @@ In order to change this add something to the config for tables on master that do
* [HBASE-14548](https://issues.apache.org/jira/browse/HBASE-14548) | *Major* | **Expand how table coprocessor jar and dependency path can be specified**
Allow a directory containing the jars or some wildcards to be specified, such as: hdfs://namenode:port/user/hadoop-user/
or
hdfs://namenode:port/user/hadoop-user/\*.jar
@ -7185,12 +7185,12 @@ This patch introduces a new infrastructure for creation and maintenance of Maven
NOTE that this patch should introduce two new WARNINGs ("Using platform encoding ... to copy filtered resources") into the hbase install process. These warnings are hard-wired into the maven-archetype-plugin:create-from-project goal. See hbase/hbase-archetypes/README.md, footnote [6] for details.
After applying the patch, see hbase/hbase-archetypes/README.md for details regarding the new archetype infrastructure introduced by this patch. (The README text is also conveniently positioned at the top of the patch itself.)
Here is the opening paragraph of the README.md file:
=================
The hbase-archetypes subproject of hbase provides an infrastructure for creation and maintenance of Maven archetypes pertinent to HBase. Upon deployment to the archetype catalog of the central Maven repository, these archetypes may be used by end-user developers to autogenerate completely configured Maven projects (including fully-functioning sample code) through invocation of the archetype:generate goal of the maven-archetype-plugin.
========
The README.md file also contains several paragraphs under the heading, "Notes for contributors and committers to the HBase project", which explains the layout of 'hbase-archetypes', and how archetypes are created and installed into the local Maven repository, ready for deployment to the central Maven repository. It also outlines how new archetypes may be developed and added to the collection in the future.
@ -7249,7 +7249,7 @@ Adds a FifoRpcSchedulerFactory so you can try the FifoRpcScheduler by setting "
* [HBASE-15989](https://issues.apache.org/jira/browse/HBASE-15989) | *Major* | **Remove hbase.online.schema.update.enable**
Removes the "hbase.online.schema.update.enable" property.
Removes the "hbase.online.schema.update.enable" property.
From now on, every operation that alters the schema (e.g. modifyTable, addFamily, removeFamily, ...) will use the online schema update; there is no need to disable/enable the table.
@ -7318,12 +7318,12 @@ See http://mail-archives.apache.org/mod\_mbox/hbase-dev/201605.mbox/%3CCAMUu0w-Z
* [HBASE-15228](https://issues.apache.org/jira/browse/HBASE-15228) | *Major* | **Add the methods to RegionObserver to trigger start/complete restoring WALs**
Added two hooks around WAL restore.
preReplayWALs(final ObserverContext\<? extends RegionCoprocessorEnvironment\> ctx, HRegionInfo info, Path edits)
and
postReplayWALs(final ObserverContext\<? extends RegionCoprocessorEnvironment\> ctx, HRegionInfo info, Path edits)
These will be called at the start and end of the restore of a WAL file.
The other hook around WAL restore (preWALRestore) will be called before the restore of every entry within the WAL file.
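A hedged sketch of overriding the two new hooks; the release note above quotes the original HRegionInfo-based signatures, while this sketch assumes the 2.x form that passes RegionInfo. The timing bookkeeping is illustrative only and coprocessor registration is omitted:
{code}
public class WalReplayTimingObserver implements RegionObserver {
  private volatile long replayStartMillis;

  @Override
  public void preReplayWALs(ObserverContext<? extends RegionCoprocessorEnvironment> ctx,
      RegionInfo info, Path edits) throws IOException {
    // Called once, before the first entry of this WAL file is replayed for the region.
    replayStartMillis = System.currentTimeMillis();
  }

  @Override
  public void postReplayWALs(ObserverContext<? extends RegionCoprocessorEnvironment> ctx,
      RegionInfo info, Path edits) throws IOException {
    // Called once, after the whole WAL file has been replayed for the region.
    long elapsedMillis = System.currentTimeMillis() - replayStartMillis;
    // A real observer would record elapsedMillis via metrics or logging.
  }
}
{code}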
@ -7565,12 +7565,12 @@ No functional change. Added javadoc, comments, and extra trace-level logging to
Use 'hbase.hstore.compaction.date.tiered.window.factory.class' to specify the window implementation you like for date tiered compaction. Now the only and default implementation is org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory.
{code}
\<property\>
\<name\>hbase.hstore.compaction.date.tiered.window.factory.class\</name\>
\<value\>org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory\</value\>
\</property\>
{code}
@ -7669,15 +7669,15 @@ With this patch combined with HBASE-15389, when we compact, we can output multip
2. Bulk load files and the old file generated by major compaction before upgrading to DTCP.
This will change the way to enable date tiered compaction.
To turn it on:
hbase.hstore.engine.class: org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine
With tiered compaction, all servers in the cluster will promote windows to a higher tier at the same time, so using a compaction throttle is recommended:
hbase.regionserver.throughput.controller:org.apache.hadoop.hbase.regionserver.compactions.PressureAwareCompactionThroughputController
hbase.hstore.compaction.throughput.higher.bound and hbase.hstore.compaction.throughput.lower.bound need to be set for desired throughput range as uncompressed rates.
Because there will most likely be more store files around, we need to adjust the configuration so that flushes won't be blocked and compaction will be properly throttled:
hbase.hstore.blockingStoreFiles: change to 50 if using all default parameters when turning on date tiered compaction. Use 1.5~2 x the projected file count if changing the parameters. Projected file count = windows per tier x tier count + incoming window min + files older than max age.
Because major compaction is turned on now, we also need to adjust the configuration for max file to compact according to the larger file count:
hbase.hstore.compaction.max: set to the same number as hbase.hstore.blockingStoreFiles.
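A hedged sketch collecting the settings above in one place; the property names come from this note, the numeric values follow its "all defaults" guidance, and in practice these would be set in hbase-site.xml rather than programmatically:
{code}
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.hstore.engine.class",
    "org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine");
conf.set("hbase.regionserver.throughput.controller",
    "org.apache.hadoop.hbase.regionserver.compactions.PressureAwareCompactionThroughputController");
// With all other parameters left at their defaults, the note suggests 50 for both.
conf.setInt("hbase.hstore.blockingStoreFiles", 50);
conf.setInt("hbase.hstore.compaction.max", 50);
{code}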
@ -7774,7 +7774,7 @@ Adds a configuration parameter "hbase.ipc.max.request.size" which defaults to 25
* [HBASE-15412](https://issues.apache.org/jira/browse/HBASE-15412) | *Major* | **Add average region size metric**
Adds a new metric called "averageRegionSize" that is emitted as a regionserver metric. Metric description:
Average region size over the region server including memstore and storefile sizes
@ -7817,7 +7817,7 @@ Fixed an issue in REST server checkAndDelete operation where the remaining cells
* [HBASE-15377](https://issues.apache.org/jira/browse/HBASE-15377) | *Major* | **Per-RS Get metric is time based, per-region metric is size-based**
Per-region metrics related to Get histograms are changed from being response-size based to being latency based, similar to the per-regionserver metrics of the same name.
Added GetSize histogram metrics at the per-regionserver and per-region level for the response sizes.
@ -7826,9 +7826,9 @@ Added GetSize histogram metrics at the per-regionserver and per-region level for
* [HBASE-6721](https://issues.apache.org/jira/browse/HBASE-6721) | *Major* | **RegionServer Group based Assignment**
[ADVANCED USERS ONLY] This patch adds a new experimental module hbase-rsgroup. It is an advanced feature for partitioning regionservers into distinctive groups for strict isolation, and should only be used by users who are sophisticated enough to understand the full implications and have a sufficient background in managing HBase clusters.
RSGroups can be defined and managed with shell commands or corresponding Java APIs. A server can be added to a group with hostname and port pair, and tables can be moved to this group so that only regionservers in the same rsgroup can host the regions of the table. RegionServers and tables can only belong to 1 group at a time. By default, all tables and regionservers belong to the "default" group. System tables can also be put into a group using the regular APIs. A custom balancer implementation tracks assignments per rsgroup and makes sure to move regions to the relevant regionservers in that group. The group information is stored in a regular HBase table, and a zookeeper-based read-only cache is used at the cluster bootstrap time.
To enable, add the following to your hbase-site.xml and restart your Master:
@ -7857,7 +7857,7 @@ This adds a group to the 'hbase:rsgroup' system table. Add a server (hostname +
* [HBASE-15435](https://issues.apache.org/jira/browse/HBASE-15435) | *Major* | **Add WAL (in bytes) written metric**
Adds a new metric named "writtenBytes" as a per-regionserver metric. Metric Description:
Size (in bytes) of the data written to the WAL.
@ -7908,7 +7908,7 @@ on branch-1, branch-1.2 and branch 1.3 we now check if the exception is meta-cle
* [HBASE-15376](https://issues.apache.org/jira/browse/HBASE-15376) | *Major* | **ScanNext metric is size-based while every other per-operation metric is time based**
Removed ScanNext histogram metrics at the regionserver level and per-region level since their semantics are not compatible with other similar metrics (size histogram vs latency histogram).
Instead, this patch adds ScanTime and ScanSize histogram metrics at the regionserver and per-region level.
@ -7931,7 +7931,7 @@ Previously RPC request scheduler in HBase had 2 modes in could operate in:
This patch adds new type of scheduler to HBase, based on the research around controlled delay (CoDel) algorithm [1], used in networking to combat bufferbloat, as well as some analysis on generalizing it to generic request queues [2]. The purpose of that work is to prevent long standing call queues caused by discrepancy between request rate and available throughput, caused by kernel/disk IO/networking stalls.
The new RPC scheduler can be enabled by setting hbase.ipc.server.callqueue.type=codel in the configuration. Several additional parameters allow tuning the algorithm's behavior:
hbase.ipc.server.callqueue.codel.target.delay
hbase.ipc.server.callqueue.codel.interval
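A minimal sketch of enabling it programmatically; the numeric values below are illustrative assumptions, not recommended settings:
{code}
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.ipc.server.callqueue.type", "codel");
// Illustrative values only; tune for your workload.
conf.setInt("hbase.ipc.server.callqueue.codel.target.delay", 100);
conf.setInt("hbase.ipc.server.callqueue.codel.interval", 100);
{code}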
@ -8105,7 +8105,7 @@ Removed IncrementPerformanceTest. It is not as configurable as the additions mad
* [HBASE-15218](https://issues.apache.org/jira/browse/HBASE-15218) | *Blocker* | **On RS crash and replay of WAL, loosing all Tags in Cells**
This issue fixes
- In case of normal WAL (not encrypted) we were losing all cell tags on WAL replay after an RS crash
- In case of encrypted WAL we were not even persisting Cell tags in WAL. Tags from all unflushed (to HFile) Cells will get lost even after WAL replay recovery is done.
@ -8154,13 +8154,13 @@ If you are using co processors and refer the Cells in the read results, DO NOT s
* [HBASE-15145](https://issues.apache.org/jira/browse/HBASE-15145) | *Major* | **HBCK and Replication should authenticate to zookepeer using server principal**
Added a new command line argument: --auth-as-server to enable authenticating to ZooKeeper as the HBase Server principal. This is required on secure clusters for replication operations like add\_peer, list\_peers, etc. until HBASE-11392 is fixed. This advanced option can also be used for manually fixing secure znodes.
Commands can now be invoked like:
hbase --auth-as-server shell
hbase --auth-as-server zkcli
HBCK in a secure setup also needs to authenticate to ZK using the server's principal. This is turned on by default (no need to pass an additional argument).
When authenticating as server, HBASE\_SERVER\_JAAS\_OPTS is concatenated to HBASE\_OPTS if defined in hbase-env.sh. Otherwise, HBASE\_REGIONSERVER\_OPTS is concatenated.
@ -8209,7 +8209,7 @@ The \`hbase version\` command now outputs directly to stdout rather than to a lo
* [HBASE-15027](https://issues.apache.org/jira/browse/HBASE-15027) | *Major* | **Refactor the way the CompactedHFileDischarger threads are created**
The property 'hbase.hfile.compactions.discharger.interval' has been renamed to 'hbase.hfile.compaction.discharger.interval'; it describes the interval after which the compaction discharger chore service should run.
The property 'hbase.hfile.compaction.discharger.thread.count' describes the thread count that does the compaction discharge work.
The CompactedHFilesDischarger is a chore service now started as part of the RegionServer; this chore service iterates over all the online regions in that RS and uses the RegionServer's executor service to launch a set of threads that do this job of cleaning up compacted files.
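A hedged sketch of the renamed properties in use (the values shown are illustrative assumptions; these are normally set in hbase-site.xml):
{code}
Configuration conf = HBaseConfiguration.create();
// Renamed key (was hbase.hfile.compactions.discharger.interval); values are illustrative.
conf.setInt("hbase.hfile.compaction.discharger.interval", 2 * 60 * 1000);
conf.setInt("hbase.hfile.compaction.discharger.thread.count", 4);
{code}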
@ -8217,8 +8217,8 @@ The CompactedHFilesDischarger is a chore service now started as part of the Regi
* [HBASE-14468](https://issues.apache.org/jira/browse/HBASE-14468) | *Major* | **Compaction improvements: FIFO compaction policy**
FIFO compaction policy selects only files which have all cells expired. The column family MUST have a non-default TTL.
Essentially, the FIFO compactor does only one job: it collects expired store files.
Because we do not do any real compaction, we do not use CPU and IO (disk and network), and we do not evict hot data from the block cache. The result: improved throughput and latency for both writes and reads.
See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style
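A rough sketch of enabling FIFO compaction on a column family; the configuration key and policy class name here follow the default store engine's conventions and should be treated as assumptions to verify against your version:
{code}
HColumnDescriptor cf = new HColumnDescriptor("d");
cf.setTimeToLive(24 * 60 * 60); // FIFO requires a non-default TTL; one day here
cf.setConfiguration("hbase.hstore.defaultengine.compactionpolicy.class",
  "org.apache.hadoop.hbase.regionserver.compactions.FIFOCompactionPolicy");
{code}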
@ -8281,7 +8281,7 @@ All clients before 1.2.0 will not get this multi request chunking based upon blo
* [HBASE-14951](https://issues.apache.org/jira/browse/HBASE-14951) | *Minor* | **Make hbase.regionserver.maxlogs obsolete**
Rolling WAL events across a cluster can be highly correlated, hence flushing memstores, hence triggering minor compactions that can be promoted to major ones. These events are highly correlated in time if there is a balanced write load on the regions in a table. The default value for the maximum number of WAL files (\*hbase.regionserver.maxlogs\*), which controls WAL rolling events, is 32, which is too small for many modern deployments.
Now we calculate this value dynamically (if not defined by user), using the following formula:
maxLogs = Math.max( 32, HBASE\_HEAP\_SIZE \* memstoreRatio \* 2/ LogRollSize), where
@ -8289,7 +8289,7 @@ maxLogs = Math.max( 32, HBASE\_HEAP\_SIZE \* memstoreRatio \* 2/ LogRollSize), w
memstoreRatio is \*hbase.regionserver.global.memstore.size\*
LogRollSize is maximum WAL file size (default 0.95 \* HDFS block size)
We need to avoid, or at least minimize, events where the RS has to flush memstores prematurely only because it reached the artificial limit of hbase.regionserver.maxlogs; this is why we put the 2x multiplier in the equation, which gives a maximum WAL capacity of 2 x the RS memstore size.
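A worked example of the formula, with assumed values for heap size, memstore ratio, and HDFS block size:
{code}
// Assumed values: 16 GB heap, global memstore ratio 0.4, 256 MB HDFS block size.
long heapBytes = 16L * 1024 * 1024 * 1024;
float memstoreRatio = 0.4f;
long logRollSize = (long) (0.95 * 256 * 1024 * 1024);
int maxLogs = Math.max(32, (int) (heapBytes * memstoreRatio * 2 / logRollSize));
// maxLogs is roughly 53 here, i.e. about 2 x the memstore capacity expressed in WAL files.
{code}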
Runaway WAL files.
@ -8321,7 +8321,7 @@ Setting it to false ( the default ) will help ensure a more even distribution of
* [HBASE-14534](https://issues.apache.org/jira/browse/HBASE-14534) | *Minor* | **Bump yammer/coda/dropwizard metrics dependency version**
Updated yammer metrics to version 3.1.2 (now it's been renamed to dropwizard). API has changed quite a bit, consult https://dropwizard.github.io/metrics/3.1.0/manual/core/ for additional information.
Note that among other things, in yammer 2.2.0 histograms were by default created in non-biased mode (uniform sampling), while in 3.1.0 histograms created via MetricsRegistry.histogram(...) are by default exponentially decayed. This shouldn't affect end users, though.
@ -8375,7 +8375,7 @@ Following are the additional configurations added for this enhancement,
For example: If source cluster FS client configurations are copied in peer cluster under directory /home/user/dc1/ then hbase.replication.cluster.id should be configured as dc1 and hbase.replication.conf.dir as /home/user
Note:
a. Any modification to source cluster FS client configuration files in peer cluster side replication configuration directory then it needs to restart all its peer(s) cluster RS with default hbase.replication.source.fs.conf.provider.
b. Only 'xml' type files will be loaded by the default hbase.replication.source.fs.conf.provider.
@ -8573,7 +8573,7 @@ This patch adds shell support for region normalizer (see HBASE-13103).
- 'normalizer\_switch' allows user to turn normalizer on and off
- 'normalize' runs region normalizer if it's turned on.
Also the 'alter' command has been extended to allow the user to enable/disable region normalization per table (disabled by default). Use it as
alter 'testtable', {NORMALIZATION\_MODE =\> 'true'}
@ -8871,14 +8871,14 @@ For more details on how to use the feature please consult the HBase Reference Gu
Removed Table#getRowOrBefore, Region#getClosestRowBefore, Store#getRowKeyAtOrBefore, RemoteHTable#getRowOrBefore apis and Thrift support for getRowOrBefore.
Also removed two coprocessor hooks preGetClosestRowBefore and postGetClosestRowBefore.
Users of this API can instead use a reverse scan, something like below:
{code}
Scan scan = new Scan(row);
scan.setSmall(true);
scan.setCaching(1);
scan.setReversed(true);
scan.addFamily(family);
{code}
Pass this scan object to the scanner and retrieve the first Result from the scanner output.
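For completeness, a sketch of that last step; the Connection instance and the table name are assumptions for illustration:
{code}
try (Table table = connection.getTable(TableName.valueOf("mytable"));
     ResultScanner scanner = table.getScanner(scan)) {
  Result result = scanner.next(); // closest row at or before 'row'
}
{code}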
@ -8894,7 +8894,7 @@ Changes parameters to filterColumn so takes a Cell rather than a byte [].
hbase-client-1.2.7-SNAPSHOT.jar, ColumnPrefixFilter.class
package org.apache.hadoop.hbase.filter
ColumnPrefixFilter.filterColumn ( byte[ ] buffer, int qualifierOffset, int qualifierLength ) : Filter.ReturnCode
org/apache/hadoop/hbase/filter/ColumnPrefixFilter.filterColumn:([BII)Lorg/apache/hadoop/hbase/filter/Filter$ReturnCode;
Ditto for filterColumnValue in SingleColumnValueFilter. Takes a Cell instead of byte array.
@ -9088,7 +9088,7 @@ hbase-shaded-client and hbase-shaded-server modules will not build the actual ja
* [HBASE-13754](https://issues.apache.org/jira/browse/HBASE-13754) | *Major* | **Allow non KeyValue Cell types also to oswrite**
This jira has removed the already deprecated method
KeyValue#oswrite(final KeyValue kv, final OutputStream out)
@ -9128,11 +9128,11 @@ Purge support for parsing zookeepers zoo.cfg deprecated since hbase-0.96.0
MOTIVATION
A pipelined scan API is introduced for speeding up applications that combine massive data traversal with compute-intensive processing. Traditional HBase scans save network trips through prefetching the data to the client side cache. However, they prefetch synchronously: the fetch request to regionserver is invoked only when the entire cache is consumed. This leads to a stop-and-wait access pattern, in which the client stalls until the next chunk of data is fetched. Applications that do significant processing can benefit from background data prefetching, which eliminates this bottleneck. The pipelined scan implementation overlaps the cache population at the client side with application processing. Namely, it issues a new scan RPC when the iteration retrieves 50% of the cache. If the application processing (that is, the time between invocations of next()) is substantial, the new chunk of data will be available before the previous one is exhausted, and the client will not experience any delay. Ideally, the prefetch and the processing times should be balanced.
API AND CONFIGURATION
Asynchronous scanning can be configured either globally for all tables and scans, or on a per-scan basis via a new Scan class API.
Configuration in hbase-site.xml: hbase.client.scanner.async.prefetch, default false:
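A sketch of both options; the per-scan setter name (Scan#setAsyncPrefetch) is an assumption based on the Scan API this change describes:
{code}
Configuration conf = HBaseConfiguration.create();
// Globally, for all scans made with this configuration:
conf.setBoolean("hbase.client.scanner.async.prefetch", true);
// Or per scan:
Scan scan = new Scan();
scan.setAsyncPrefetch(true);
{code}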
@ -9175,8 +9175,8 @@ Introduces a new config hbase.fs.tmp.dir which is a directory in HDFS (or defaul
* [HBASE-10800](https://issues.apache.org/jira/browse/HBASE-10800) | *Major* | **Use CellComparator instead of KVComparator**
From 2.0 branch onwards KVComparator and its subclasses MetaComparator, RawBytesComparator are all deprecated.
All the comparators are moved to CellComparator. MetaCellComparator, a subclass of CellComparator, will be used to compare hbase:meta cells.
Previously exposed static instances KeyValue.COMPARATOR, KeyValue.META\_COMPARATOR and KeyValue.RAW\_COMPARATOR are deprecated instead use CellComparator.COMPARATOR and CellComparator.META\_COMPARATOR.
Also note that there will be no RawBytesComparator. Wherever we need to compare raw bytes, use Bytes.BYTES\_RAWCOMPARATOR.
CellComparator will always operate on cells and its components, abstracting the fact that a cell can be backed by a single byte[] as opposed to how KVComparators were working.
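A small sketch using the replacement comparators named above (cellA, cellB, bytesA and bytesB are assumed inputs; exact class locations may differ across versions):
{code}
int order = CellComparator.COMPARATOR.compare(cellA, cellB);
int metaOrder = CellComparator.META_COMPARATOR.compare(cellA, cellB);
int rawOrder =
  Bytes.BYTES_RAWCOMPARATOR.compare(bytesA, 0, bytesA.length, bytesB, 0, bytesB.length);
{code}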
@ -9194,7 +9194,7 @@ Adds a renewLease call to ClientScanner
* [HBASE-13564](https://issues.apache.org/jira/browse/HBASE-13564) | *Major* | **Master MBeans are not published**
To use the coprocessor-based JMX implementation provided by HBase for Master.
Add the below property in the hbase-site.xml file:
\<property\>
\<name\>hbase.coprocessor.master.classes\</name\>
@ -9310,7 +9310,7 @@ Compose thrift exception text from the text of the entire cause chain of the und
* [HBASE-13275](https://issues.apache.org/jira/browse/HBASE-13275) | *Major* | **Setting hbase.security.authorization to false does not disable authorization**
Prior to this change the configuration setting 'hbase.security.authorization' had no effect if the security coprocessors were installed. The act of installing the security coprocessors was assumed to indicate that active authorization was desired and required. Now it is possible to install the security coprocessors yet have them operate in a passive state with active authorization disabled by setting 'hbase.security.authorization' to false. This can be useful but is probably not what you want. For more information, consult the Security section of the HBase online manual.
'hbase.security.authorization' defaults to true for backwards-compatible behavior.
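A minimal sketch of the passive-mode setting described above (the same key can equally be set in hbase-site.xml):
{code}
Configuration conf = HBaseConfiguration.create();
// Passive mode: security coprocessors installed, active authorization disabled.
conf.setBoolean("hbase.security.authorization", false);
{code}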
@ -9346,15 +9346,15 @@ Use hbase.client.scanner.max.result.size instead to enforce practical chunk size
Results returned from RPC calls may now be returned as partials
When is a Result marked as a partial?
When the server must stop the scan because the max size limit has been reached. Means that the LAST Result returned within the ScanResult's Result array may be marked as a partial if the scan's max size limit caused it to stop in the middle of a row.
Incompatible Change: The return type of InternalScanners#next and RegionScanners#nextRaw has been changed to NextState from boolean
The previous boolean return value can be accessed via NextState#hasMoreValues()
Provides more context as to what happened inside the scanner
Scan caching default has been changed to Integer.MAX\_VALUE
This value works together with the new maxResultSize value from HBASE-12976 (defaults to 2MB)
Results returned from server on basis of size rather than number of rows
Provides better use of network since row size varies amongst tables
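A sketch of how a client might opt in to partial Results; the method names Scan#setAllowPartialResults and Result#isPartial are assumptions based on this change, and table is an assumed Table instance:
{code}
Scan scan = new Scan();
scan.setAllowPartialResults(true); // opt in to seeing partial Results client-side
try (ResultScanner scanner = table.getScanner(scan)) {
  for (Result r : scanner) {
    if (r.isPartial()) {
      // only part of a row; more cells for this row may follow
    }
  }
}
{code}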
@ -9672,14 +9672,14 @@ This client is on by default in master branch (2.0 hbase). It is off in branch-1
Namespace auditor provides basic quota support for namespaces in terms of number of tables and number of regions. In order to use namespace quotas, quota support must be enabled by setting
"hbase.quota.enabled" property to true in hbase-site.xml file.
Users can add quota information to a namespace while creating new namespaces or by altering existing ones.
Examples:
1. create\_namespace 'ns1', {'hbase.namespace.quota.maxregions'=\>'10'}
2. create\_namespace 'ns2', {'hbase.namespace.quota.maxtables'=\>'2','hbase.namespace.quota.maxregions'=\>'5'}
3. alter\_namespace 'ns3', {METHOD =\> 'set', 'hbase.namespace.quota.maxtables'=\>'5','hbase.namespace.quota.maxregions'=\>'25'}
The quotas can be modified/added to a namespace at any point in time. To remove quotas, the following command can be used:
alter\_namespace 'ns3', {METHOD =\> 'unset', NAME =\> 'hbase.namespace.quota.maxtables'}
alter\_namespace 'ns3', {METHOD =\> 'unset', NAME =\> 'hbase.namespace.quota.maxregions'}
@ -9839,7 +9839,7 @@ NavigableMap\<byte [], List\<KeyValue\>\> getFamilyMap()
* [HBASE-12084](https://issues.apache.org/jira/browse/HBASE-12084) | *Major* | **Remove deprecated APIs from Result**
The below KeyValue based APIs are removed from Result
KeyValue[] raw()
List\<KeyValue\> list()
List\<KeyValue\> getColumn(byte [] family, byte [] qualifier)
KeyValue getColumnLatest(byte [] family, byte [] qualifier)
@ -9854,7 +9854,7 @@ Cell getColumnLatestCell(byte [] family, int foffset, int flength, byte [] quali
respectively
Also the constructors which were taking KeyValues also removed
Result(KeyValue [] cells)
Result(List\<KeyValue\> kvs)
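A brief sketch of the Cell-based replacements in use; table, row, family and qualifier are assumed to exist:
{code}
Result result = table.get(new Get(row));
Cell latest = result.getColumnLatestCell(family, qualifier);
Cell[] cells = result.rawCells();
{code}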
@ -9865,7 +9865,7 @@ Result(List\<KeyValue\> kvs)
The following APIs are removed from Filter
KeyValue transform(KeyValue)
KeyValue getNextKeyHint(KeyValue)
and replaced with
Cell transformCell(Cell)
Cell getNextCellHint(Cell)
respectively.
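A hedged sketch of a custom filter using the new Cell-based hooks (the class name is hypothetical; default behavior shown only to illustrate the signatures):
{code}
public class PassThroughFilter extends FilterBase {
  @Override
  public Cell transformCell(Cell v) throws IOException {
    return v; // no-op transformation
  }

  @Override
  public Cell getNextCellHint(Cell currentCell) throws IOException {
    return null; // no seek hint
  }
}
{code}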
@ -10012,6 +10012,3 @@ To enable zoo.cfg reading, for which support may be removed in a future release,
properties from a zoo.cfg file has been deprecated.
\</description\>
\</property\>
View File
@ -17,7 +17,7 @@
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */
#
usage="Usage: considerAsDead.sh --hostname serverName"
@ -50,12 +50,12 @@ do
rs_parts=(${rs//,/ })
hostname=${rs_parts[0]}
echo $deadhost
echo $hostname
if [ "$deadhost" == "$hostname" ]; then
znode="$zkrs/$rs"
echo "ZNode Deleting:" $znode
$bin/hbase zkcli delete $znode > /dev/null 2>&1
sleep 1
ssh $HBASE_SSH_OPTS $hostname $remote_cmd 2>&1 | sed "s/^/$hostname: /"
fi
done
View File
@ -74,7 +74,7 @@ check_for_znodes() {
znodes=`"$bin"/hbase zkcli ls $zparent/$zchild 2>&1 | tail -1 | sed "s/\[//" | sed "s/\]//"`
if [ "$znodes" != "" ]; then
echo -n "ZNode(s) [${znodes}] of $command are not expired. Exiting without cleaning hbase data."
echo #force a newline
exit 1;
else
echo -n "All ZNode(s) of $command are expired."
@ -99,7 +99,7 @@ execute_clean_acls() {
clean_up() {
case $1 in
--cleanZk)
execute_zk_command "deleteall ${zparent}";
;;
--cleanHdfs)
@ -120,7 +120,7 @@ clean_up() {
;;
*)
;;
esac
}
check_znode_exists() {
View File
@ -103,7 +103,7 @@ do
break
fi
done
# Allow alternate hbase conf dir location.
HBASE_CONF_DIR="${HBASE_CONF_DIR:-$HBASE_HOME/conf}"
# List of hbase regions servers.
@ -162,7 +162,7 @@ fi
# memory usage to explode. Tune the variable down to prevent vmem explosion.
export MALLOC_ARENA_MAX=${MALLOC_ARENA_MAX:-4}
# Now having JAVA_HOME defined is required
if [ -z "$JAVA_HOME" ]; then
cat 1>&2 <<EOF
+======================================================================+
View File
@ -17,7 +17,7 @@
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */
#
# Run a shell command on all backup master hosts.
#
# Environment Variables
@ -45,7 +45,7 @@ bin=`cd "$bin">/dev/null; pwd`
. "$bin"/hbase-config.sh
# If the master backup file is specified in the command line,
# then it takes precedence over the definition in
# hbase-env.sh. Save it here.
HOSTLIST=$HBASE_BACKUP_MASTERS
@ -69,6 +69,6 @@ if [ -f $HOSTLIST ]; then
sleep $HBASE_SLAVE_SLEEP
fi
done
fi
wait
View File
@ -17,7 +17,7 @@
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */
#
# Run a shell command on all regionserver hosts.
#
# Environment Variables
@ -45,7 +45,7 @@ bin=`cd "$bin">/dev/null; pwd`
. "$bin"/hbase-config.sh
# If the regionservers file is specified in the command line,
# then it takes precedence over the definition in
# hbase-env.sh. Save it here.
HOSTLIST=$HBASE_REGIONSERVERS
View File
@ -52,7 +52,7 @@ fi
export HBASE_LOG_PREFIX=hbase-$HBASE_IDENT_STRING-master-$HOSTNAME
export HBASE_LOGFILE=$HBASE_LOG_PREFIX.log
logout=$HBASE_LOG_DIR/$HBASE_LOG_PREFIX.out
loglog="${HBASE_LOG_DIR}/${HBASE_LOGFILE}"
pid=${HBASE_PID_DIR:-/tmp}/hbase-$HBASE_IDENT_STRING-master.pid
@ -74,7 +74,7 @@ fi
# distributed == false means that the HMaster will kill ZK when it exits
# HBASE-6504 - only take the first line of the output in case verbose gc is on
distMode=`$bin/hbase --config "$HBASE_CONF_DIR" org.apache.hadoop.hbase.util.HBaseConfTool hbase.cluster.distributed | head -n 1`
if [ "$distMode" == 'true' ]
if [ "$distMode" == 'true' ]
then
"$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" stop zookeeper
fi
View File
@ -68,7 +68,7 @@ while [ $# -ne 0 ]; do
-h|--help)
print_usage ;;
--kill)
IS_KILL=1
cmd_specified ;;
--show)
IS_SHOW=1
@ -106,5 +106,3 @@ else
echo "No command specified" >&2
exit 1
fi
View File
@ -17,7 +17,7 @@
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */
#
# Run a shell command on all zookeeper hosts.
#
# Environment Variables
View File
@ -33,7 +33,7 @@
# The maximum amount of heap to use. Default is left to JVM default.
# export HBASE_HEAPSIZE=1G
# Uncomment below if you intend to use off heap cache. For example, to allocate 8G of
# offheap, set the value to "8G".
# export HBASE_OFFHEAPSIZE=1G
@ -70,7 +70,7 @@
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"
# See the package documentation for org.apache.hadoop.hbase.io.hfile for other configurations
# needed setting up off-heap block caching.
# Uncomment and adjust to enable JMX exporting
# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
@ -101,7 +101,7 @@
# Where log files are stored. $HBASE_HOME/logs by default.
# export HBASE_LOG_DIR=${HBASE_HOME}/logs
# Enable remote JDWP debugging of major HBase processes. Meant for Core Developers
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8072"
@ -125,13 +125,13 @@
# Tell HBase whether it should manage it's own instance of ZooKeeper or not.
# export HBASE_MANAGES_ZK=true
# The default log rolling policy is RFA, where the log file is rolled as per the size defined for the
# RFA appender. Please refer to the log4j2.properties file to see more details on this appender.
# In case one needs to do log rolling on a date change, one should set the environment property
# HBASE_ROOT_LOGGER to "<DESIRED_LOG LEVEL>,DRFA".
# For example:
# export HBASE_ROOT_LOGGER=INFO,DRFA
# The reason for changing default to RFA is to avoid the boundary case of filling out disk space as
# DRFA doesn't put any cap on the log size. Please refer to HBase-5655 for more context.
# Tell HBase whether it should include Hadoop's lib when start up,
View File
@ -24,20 +24,20 @@
<property>
<name>security.client.protocol.acl</name>
<value>*</value>
<description>ACL for ClientProtocol and AdminProtocol implementations (ie.
clients talking to HRegionServers)
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.admin.protocol.acl</name>
<value>*</value>
<description>ACL for HMasterInterface protocol implementation (ie.
clients talking to HMaster for admin operations).
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
@ -46,8 +46,8 @@
<value>*</value>
<description>ACL for HMasterRegionInterface protocol implementations
(for HRegionServers communicating with HMaster)
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
</configuration>
View File
@ -38,4 +38,4 @@ ${type_declaration}</template><template autoinsert="true" context="classbody_con
</template><template autoinsert="true" context="catchblock_context" deleted="true" description="Code in new catch blocks" enabled="true" id="org.eclipse.jdt.ui.text.codetemplates.catchblock" name="catchblock">// ${todo} Auto-generated catch block
${exception_var}.printStackTrace();</template><template autoinsert="false" context="methodbody_context" deleted="true" description="Code in created method stubs" enabled="true" id="org.eclipse.jdt.ui.text.codetemplates.methodbody" name="methodbody">// ${todo} Implement ${enclosing_type}.${enclosing_method}
${body_statement}</template><template autoinsert="false" context="constructorbody_context" deleted="true" description="Code in created constructor stubs" enabled="true" id="org.eclipse.jdt.ui.text.codetemplates.constructorbody" name="constructorbody">${body_statement}
// ${todo} Implement constructor</template><template autoinsert="true" context="getterbody_context" deleted="true" description="Code in created getters" enabled="true" id="org.eclipse.jdt.ui.text.codetemplates.getterbody" name="getterbody">return ${field};</template><template autoinsert="true" context="setterbody_context" deleted="true" description="Code in created setters" enabled="true" id="org.eclipse.jdt.ui.text.codetemplates.setterbody" name="setterbody">${field} = ${param};</template></templates>
// ${todo} Implement constructor</template><template autoinsert="true" context="getterbody_context" deleted="true" description="Code in created getters" enabled="true" id="org.eclipse.jdt.ui.text.codetemplates.getterbody" name="getterbody">return ${field};</template><template autoinsert="true" context="setterbody_context" deleted="true" description="Code in created setters" enabled="true" id="org.eclipse.jdt.ui.text.codetemplates.setterbody" name="setterbody">${field} = ${param};</template></templates>
View File
@ -87,7 +87,7 @@ these personalities; a pre-packaged personality can be selected via the
`--project` parameter. There is a provided HBase personality in Yetus, however
the HBase project maintains its own within the HBase source repository. Specify
the path to the personality file using `--personality`. The HBase repository
places this file under `dev-support/hbase-personality.sh`.
## Docker mode
View File
@ -340,53 +340,53 @@ EOF
echo "writing out example TSV to example.tsv"
cat >"${working_dir}/example.tsv" <<EOF
row1 value8 value8
row3 value2
row2 value9
row10 value1
pow1 value8 value8
pow3 value2
pow2 value9
pow10 value1
paw1 value8 value8
paw3 value2
paw2 value9
paw10 value1
raw1 value8 value8
raw3 value2
raw2 value9
raw10 value1
aow1 value8 value8
aow3 value2
aow2 value9
aow10 value1
aaw1 value8 value8
aaw3 value2
aaw2 value9
aaw10 value1
how1 value8 value8
how3 value2
how2 value9
how10 value1
zow1 value8 value8
zow3 value2
zow2 value9
zow10 value1
zaw1 value8 value8
zaw3 value2
zaw2 value9
zaw10 value1
haw1 value8 value8
haw3 value2
haw2 value9
haw10 value1
low1 value8 value8
low3 value2
low2 value9
low10 value1
law1 value8 value8
law3 value2
law2 value9
law10 value1
EOF
View File
@ -53,7 +53,7 @@ runAllTests=0
#set to 1 to replay the failed tests. Previous reports are kept in
# fail_ files
replayFailed=0
#set to 0 to run all medium & large tests in a single maven operation
# instead of two
@ -85,10 +85,10 @@ mvnCommand="mvn "
function createListDeadProcess {
id=$$
listDeadProcess=""
#list of the process with a ppid of 1
sonProcess=`ps -o pid= --ppid 1`
#then the process with a pgid of the script
for pId in $sonProcess
do
@ -119,32 +119,32 @@ function cleanProcess {
jstack -F -l $pId
kill $pId
echo "kill sent, waiting for 30 seconds"
sleep 30
son=`ps -o pid= --pid $pId | wc -l`
if (test $son -gt 0)
then
echo "$pId, java sub process of $id, is still running after a standard kill, using kill -9 now"
echo "Stack for $pId before kill -9:"
jstack -F -l $pId
kill -9 $pId
echo "kill sent, waiting for 2 seconds"
sleep 2
echo "Process $pId killed by kill -9"
else
echo "Process $pId killed by standard kill -15"
fi
else
echo "$pId is not a java process (it's $name), I don't kill it."
fi
done
createListDeadProcess
if (test ${#listDeadProcess} -gt 0)
then
echo "There are still $sonProcess for process $id left."
else
echo "Process $id clean, no son process left"
fi
echo "Process $id clean, no son process left"
fi
}
#count the number of ',' in a string
@ -155,7 +155,7 @@ function countClasses {
count=$((cars - 1))
}
######################################### script
echo "Starting Script. Possible parameters are: runAllTests, replayFailed, nonParallelMaven"
echo "Other parameters are sent to maven"
@ -177,11 +177,11 @@ do
if [ $arg == "nonParallelMaven" ]
then
parallelMaven=0
else
args=$args" $arg"
fi
fi
fi
done
@ -195,24 +195,24 @@ for testFile in $testsList
do
lenPath=$((${#rootTestClassDirectory}))
len=$((${#testFile} - $lenPath - 5)) # len(".java") == 5
shortTestFile=${testFile:lenPath:$len}
testName=$(echo $shortTestFile | sed 's/\//\./g')
#The ',' is used in the grep pattern as we don't want to catch
# partial name
isFlaky=$((`echo $flakyTests | grep "$testName," | wc -l`))
if (test $isFlaky -eq 0)
then
isSmall=0
isMedium=0
isLarge=0
# determine the category of the test by greping into the source code
isMedium=`grep "@Category" $testFile | grep "MediumTests.class" | wc -l`
if (test $isMedium -eq 0)
then
isLarge=`grep "@Category" $testFile | grep "LargeTests.class" | wc -l`
if (test $isLarge -eq 0)
then
@ -230,22 +230,22 @@ do
fi
fi
fi
#put the test in the right list
if (test $isSmall -gt 0)
then
smallList="$smallList,$testName"
fi
if (test $isMedium -gt 0)
then
mediumList="$mediumList,$testName"
fi
if (test $isLarge -gt 0)
then
largeList="$largeList,$testName"
fi
fi
done
#remove the ',' at the beginning
@ -285,7 +285,7 @@ do
nextList=2
runList1=$runList1,$testClass
else
nextList=1
runList2=$runList2,$testClass
fi
done
@ -297,27 +297,27 @@ runList2=${runList2:1:${#runList2}}
#now we can run the tests, at last!
echo "Running small tests with one maven instance, in parallel"
#echo Small tests are $smallList
$mvnCommand -P singleJVMTests test -Dtest=$smallList $args
cleanProcess
exeTime=$(((`date +%s` - $startTime)/60))
echo "Small tests executed after $exeTime minutes"
if (test $parallelMaven -gt 0)
then
echo "Running tests with two maven instances in parallel"
$mvnCommand -P localTests test -Dtest=$runList1 $args &
#give some time to the fist process if there is anything to compile
sleep 30
$mvnCommand -P localTests test -Dtest=$runList2 $args
#wait for forked process to finish
wait
cleanProcess
exeTime=$(((`date +%s` - $startTime)/60))
echo "Medium and large (if selected) tests executed after $exeTime minutes"
@ -329,14 +329,14 @@ then
$mvnCommand -P localTests test -Dtest=$flakyTests $args
cleanProcess
exeTime=$(((`date +%s` - $startTime)/60))
echo "Flaky tests executed after $exeTime minutes"
echo "Flaky tests executed after $exeTime minutes"
fi
else
echo "Running tests with a single maven instance, no parallelization"
$mvnCommand -P localTests test -Dtest=$runList1,$runList2,$flakyTests $args
cleanProcess
exeTime=$(((`date +%s` - $startTime)/60))
echo "Single maven instance tests executed after $exeTime minutes"
echo "Single maven instance tests executed after $exeTime minutes"
fi
#let's analyze the results
@ -360,7 +360,7 @@ for testClass in `echo $fullRunList | sed 's/,/ /g'`
do
reportFile=$surefireReportDirectory/$testClass.txt
outputReportFile=$surefireReportDirectory/$testClass-output.txt
if [ -s $reportFile ];
then
isError=`grep FAILURE $reportFile | wc -l`
@ -368,22 +368,22 @@ do
then
errorList="$errorList,$testClass"
errorCounter=$(($errorCounter + 1))
#let's copy the files if we want to use it later
cp $reportFile "$surefireReportDirectory/fail_$timestamp.$testClass.txt"
if [ -s $reportFile ];
then
cp $outputReportFile "$surefireReportDirectory/fail_$timestamp.$testClass"-output.txt""
fi
else
sucessCounter=$(($sucessCounter +1))
fi
else
#report file does not exist or is empty => the test didn't finish
notFinishedCounter=$(($notFinishedCounter + 1))
notFinishedList="$notFinishedList,$testClass"
fi
done
#list of all tests that failed
@ -411,7 +411,7 @@ echo
echo "Tests in error are: $errorPresList"
echo "Tests that didn't finish are: $notFinishedPresList"
echo
echo "Execution time in minutes: $exeTime"
echo "Execution time in minutes: $exeTime"
echo "##########################"
View File
@ -33,4 +33,3 @@ export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:
export MAVEN_OPTS="${MAVEN_OPTS:-"-Xmx3100M -XX:-UsePerfData"}"
ulimit -n
View File
@ -21,7 +21,7 @@
# timestamp suffix. Deploys builds to maven.
#
# To finish, check what was build. If good copy to people.apache.org and
# close the maven repos. Call a vote.
#
# Presumes that dev-support/generate-hadoopX-poms.sh has already been run.
# Presumes your settings.xml all set up so can sign artifacts published to mvn, etc.
View File
@ -17,11 +17,11 @@
# specific language governing permissions and limitations
# under the License.
# This script assumes that your remote is called "origin"
# and that your local master branch is called "master".
# I am sure it could be made more abstract but these are the defaults.
# Edit this line to point to your default directory,
# or always pass a directory to the script.
DEFAULT_DIR="EDIT_ME"
@ -69,13 +69,13 @@ function check_git_branch_status {
}
function get_jira_status {
# This function expects as an argument the JIRA ID,
# and returns 99 if resolved and 1 if it couldn't
# get the status.
# The JIRA status looks like this in the HTML:
# span id="resolution-val" class="value resolved" >
# The following is a bit brittle, but filters for lines with
# resolution-val returns 99 if it's resolved
jira_url='https://issues.apache.org/jira/rest/api/2/issue'
jira_id="$1"
@ -106,7 +106,7 @@ while getopts ":hd:" opt; do
print_usage
exit 0
;;
*)
echo "Invalid argument: $OPTARG" >&2
print_usage >&2
exit 1
@ -135,7 +135,7 @@ get_tracking_branches
for i in "${tracking_branches[@]}"; do
git checkout -q "$i"
# Exit if git status is dirty
check_git_branch_status
git pull -q --rebase
status=$?
if [ "$status" -ne 0 ]; then
@ -169,7 +169,7 @@ for i in "${all_branches[@]}"; do
git checkout -q "$i"
# Exit if git status is dirty
check_git_branch_status
# If this branch has a remote, don't rebase it
# If it has a remote, it has a log with at least one entry
@ -184,7 +184,7 @@ for i in "${all_branches[@]}"; do
echo "Failed. Rolling back. Rebase $i manually."
git rebase --abort
fi
elif [ $status -ne 0 ]; then
# If status is 0 it means there is a remote branch, we already took care of it
echo "Unknown error: $?" >&2
exit 1
@ -195,10 +195,10 @@ done
for i in "${deleted_branches[@]}"; do
read -p "$i's JIRA is resolved. Delete? " yn
case $yn in
[Yy])
git branch -D $i
;;
*)
echo "To delete it manually, run git branch -D $deleted_branches"
;;
esac
View File
@ -52,7 +52,7 @@ if $PATCH -p0 -E --dry-run < $PATCH_FILE 2>&1 > $TMP; then
# correct place to put those files.
# NOTE 2014/07/17:
# Temporarily disabling below check since our jenkins boxes seems to be not defaulting to bash
# causing below checks to fail. Once it is fixed, we can revert the commit and enable this again.
# TMP2=/tmp/tmp.paths.2.$$
View File
@ -32,7 +32,7 @@ options:
-h Show this message
-c Run 'mvn clean' before running the tests
-f FILE Run the additional tests listed in the FILE
-u Only run unit tests. Default is to run
unit and integration tests
-n N Run each test N times. Default = 1.
-s N Print N slowest tests
@ -92,7 +92,7 @@ do
r)
server=1
;;
?)
usage
exit 1
esac
@ -175,7 +175,7 @@ done
# Print a report of the slowest running tests
if [ ! -z $showSlowest ]; then
testNameIdx=0
for (( i = 0; i < ${#test[@]}; i++ ))
do
View File
@ -29,7 +29,7 @@
#set -x
# printenv
### Setup some variables.
bindir=$(dirname $0)
# This key is set by our surefire configuration up in the main pom.xml
View File
@ -1,4 +1,4 @@
<?xml version="1.0"?>
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
@ -21,8 +21,8 @@
-->
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase</artifactId>
<version>2.5.0-SNAPSHOT</version>
<relativePath>..</relativePath>
</parent>
View File
@ -15,13 +15,11 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to the client. This tests the hbase-client package and all of the client
* tests in hbase-server.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests
View File
@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to coprocessors.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests
View File
@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to the {@code org.apache.hadoop.hbase.filter} package.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests
View File
@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as failing commonly on public build infrastructure.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests
View File
@ -15,13 +15,11 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to the {@code org.apache.hadoop.hbase.io} package. Things like HFile and
* the like.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests
View File
@ -15,23 +15,20 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as 'integration/system' test, meaning that the test class has the following
* characteristics:
* <ul>
* <li> Possibly takes hours to complete</li>
* <li> Can be run on a mini cluster or an actual cluster</li>
* <li> Can make changes to the given cluster (starting stopping daemons, etc)</li>
* <li> Should not be run in parallel of other integration tests</li>
* <li>Possibly takes hours to complete</li>
* <li>Can be run on a mini cluster or an actual cluster</li>
* <li>Can make changes to the given cluster (starting stopping daemons, etc)</li>
* <li>Should not be run in parallel of other integration tests</li>
* </ul>
*
* Integration / System tests should have a class name starting with "IntegrationTest", and
* should be annotated with @Category(IntegrationTests.class). Integration tests can be run
* using the IntegrationTestsDriver class or from mvn verify.
*
* Integration / System tests should have a class name starting with "IntegrationTest", and should
* be annotated with @Category(IntegrationTests.class). Integration tests can be run using the
* IntegrationTestsDriver class or from mvn verify.
* @see SmallTests
* @see MediumTests
* @see LargeTests
View File
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,21 +15,19 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tagging a test as 'large', means that the test class has the following characteristics:
* <ul>
* <li>it can executed in an isolated JVM (Tests can however be executed in different JVM on the
* same machine simultaneously so be careful two concurrent tests end up fighting over ports
* or other singular resources).</li>
* <li>ideally, the whole large test-suite/class, no matter how many or how few test methods it
* has, will run in last less than three minutes</li>
* <li>No large test can take longer than ten minutes; it will be killed. See 'Integeration Tests'
* if you need to run tests longer than this.</li>
* <li>it can executed in an isolated JVM (Tests can however be executed in different JVM on the
* same machine simultaneously so be careful two concurrent tests end up fighting over ports or
* other singular resources).</li>
* <li>ideally, the whole large test-suite/class, no matter how many or how few test methods it has,
* will run in last less than three minutes</li>
* <li>No large test can take longer than ten minutes; it will be killed. See 'Integeration Tests'
* if you need to run tests longer than this.</li>
* </ul>
*
* @see SmallTests
* @see MediumTests
* @see IntegrationTests
View File
@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to mapred or mapreduce.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests
View File
@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to the master.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests
View File
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,21 +15,18 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tagging a test as 'medium' means that the test class has the following characteristics:
* <ul>
* <li>it can be executed in an isolated JVM (Tests can however be executed in different JVMs on
* the same machine simultaneously so be careful two concurrent tests end up fighting over ports
* or other singular resources).</li>
* <li>ideally, the whole medium test-suite/class, no matter how many or how few test methods it
* has, will complete in 50 seconds; otherwise make it a 'large' test.</li>
* <li>it can be executed in an isolated JVM (Tests can however be executed in different JVMs on the
* same machine simultaneously so be careful two concurrent tests end up fighting over ports or
* other singular resources).</li>
* <li>ideally, the whole medium test-suite/class, no matter how many or how few test methods it
* has, will complete in 50 seconds; otherwise make it a 'large' test.</li>
* </ul>
*
* Use it for tests that cannot be tagged as 'small'. Use it when you need to start up a cluster.
*
* Use it for tests that cannot be tagged as 'small'. Use it when you need to start up a cluster.
* @see SmallTests
* @see LargeTests
* @see IntegrationTests

View File

@ -15,7 +15,6 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
View File
@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as not easily falling into any of the below categories.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests
View File
@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to RPC.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests
View File
@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to the regionserver.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests
View File
@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to replication.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests
View File
@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to the REST capability of HBase.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests


@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to security.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests


@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -20,14 +20,14 @@ package org.apache.hadoop.hbase.testclassification;
/**
* Tagging a test as 'small' means that the test class has the following characteristics:
* <ul>
* <li>it can be run simultaneously with other small tests all in the same JVM</li>
* <li>ideally, the WHOLE implementing test-suite/class, no matter how many or how few test
* methods it has, should take less than 15 seconds to complete</li>
* <li>it does not use a cluster</li>
* <li>it can be run simultaneously with other small tests all in the same JVM</li>
* <li>ideally, the WHOLE implementing test-suite/class, no matter how many or how few test methods
* it has, should take less than 15 seconds to complete</li>
* <li>it does not use a cluster</li>
* </ul>
*
* @see MediumTests
* @see LargeTests
* @see IntegrationTests
*/
public interface SmallTests {}
public interface SmallTests {
}
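For contrast with MediumTests, here is a similarly hypothetical sketch of a test tagged with the SmallTests interface above (class name and assertion are illustrative only): no cluster, safe to share a JVM with other small tests, and comfortably inside the 15-second budget.

package org.apache.hadoop.hbase.example;

import static org.junit.Assert.assertEquals;

import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.testclassification.SmallTests;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.experimental.categories.Category;

/** Hypothetical small test: pure in-memory logic, no cluster, runs in milliseconds. */
@Category(SmallTests.class)
public class TestSmallExample {

  @ClassRule
  public static final HBaseClassTestRule CLASS_RULE =
    HBaseClassTestRule.forClass(TestSmallExample.class);

  @Test
  public void testBytesRoundTrip() {
    // Bytes is the same utility the HelloHBase examples in this diff rely on.
    assertEquals("hello", Bytes.toString(Bytes.toBytes("hello")));
  }
}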


@ -15,13 +15,11 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to mapreduce and taking longer than 5 minutes to run on public build
* Tag a test as related to mapreduce and taking longer than 5 minutes to run on public build
* infrastructure.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests


@ -15,13 +15,11 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as region tests which takes longer than 5 minutes to run on public build
* infrastructure.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests


@ -15,7 +15,6 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**


@ -1,6 +1,5 @@
<?xml version="1.0"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
* Licensed to the Apache Software Foundation (ASF) under one
@ -23,8 +22,8 @@
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase-archetypes</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-archetypes</artifactId>
<version>2.5.0-SNAPSHOT</version>
<relativePath>..</relativePath>
</parent>
@ -58,10 +57,10 @@
further using xml-maven-plugin for xslt transformation, below. -->
<execution>
<id>hbase-client__copy-src-to-build-archetype-subdir</id>
<phase>generate-resources</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<phase>generate-resources</phase>
<configuration>
<outputDirectory>/${project.basedir}/../${hbase-client.dir}/${build.archetype.subdir}</outputDirectory>
<resources>
@ -76,29 +75,30 @@
</execution>
<execution>
<id>hbase-client__copy-pom-to-temp-for-xslt-processing</id>
<phase>generate-resources</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<phase>generate-resources</phase>
<configuration>
<outputDirectory>/${project.basedir}/../${hbase-client.dir}/${temp.exemplar.subdir}</outputDirectory>
<resources>
<resource>
<directory>/${project.basedir}/../${hbase-client.dir}</directory>
<filtering>true</filtering> <!-- filtering replaces ${project.version} with literal -->
<filtering>true</filtering>
<!-- filtering replaces ${project.version} with literal -->
<includes>
<include>pom.xml</include>
</includes>
</resource>
</resource>
</resources>
</configuration>
</execution>
<execution>
<id>hbase-shaded-client__copy-src-to-build-archetype-subdir</id>
<phase>generate-resources</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<phase>generate-resources</phase>
<configuration>
<outputDirectory>/${project.basedir}/../${hbase-shaded-client.dir}/${build.archetype.subdir}</outputDirectory>
<resources>
@ -113,20 +113,21 @@
</execution>
<execution>
<id>hbase-shaded-client__copy-pom-to-temp-for-xslt-processing</id>
<phase>generate-resources</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<phase>generate-resources</phase>
<configuration>
<outputDirectory>/${project.basedir}/../${hbase-shaded-client.dir}/${temp.exemplar.subdir}</outputDirectory>
<resources>
<resource>
<directory>/${project.basedir}/../${hbase-shaded-client.dir}</directory>
<filtering>true</filtering> <!-- filtering replaces ${project.version} with literal -->
<filtering>true</filtering>
<!-- filtering replaces ${project.version} with literal -->
<includes>
<include>pom.xml</include>
</includes>
</resource>
</resource>
</resources>
</configuration>
</execution>
@ -137,10 +138,10 @@
using xml-maven-plugin for xslt transformation, below. -->
<execution>
<id>hbase-client-ARCHETYPE__copy-pom-to-temp-for-xslt-processing</id>
<phase>prepare-package</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<phase>prepare-package</phase>
<configuration>
<outputDirectory>/${project.basedir}/../${hbase-client.dir}/${temp.archetype.subdir}</outputDirectory>
<resources>
@ -149,16 +150,16 @@
<includes>
<include>pom.xml</include>
</includes>
</resource>
</resource>
</resources>
</configuration>
</execution>
<execution>
<id>hbase-shaded-client-ARCHETYPE__copy-pom-to-temp-for-xslt-processing</id>
<phase>prepare-package</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<phase>prepare-package</phase>
<configuration>
<outputDirectory>/${project.basedir}/../${hbase-shaded-client.dir}/${temp.archetype.subdir}</outputDirectory>
<resources>
@ -167,7 +168,7 @@
<includes>
<include>pom.xml</include>
</includes>
</resource>
</resource>
</resources>
</configuration>
</execution>
@ -183,10 +184,10 @@
<!-- xml-maven-plugin modifies each exemplar project's pom.xml file to convert to standalone project. -->
<execution>
<id>modify-exemplar-pom-files-via-xslt</id>
<phase>process-resources</phase>
<goals>
<goal>transform</goal>
</goals>
<phase>process-resources</phase>
<configuration>
<transformationSets>
<transformationSet>
@ -213,10 +214,10 @@
prevent warnings when project is generated from archetype. -->
<execution>
<id>modify-archetype-pom-files-via-xslt</id>
<phase>package</phase>
<goals>
<goal>transform</goal>
</goals>
<phase>package</phase>
<configuration>
<transformationSets>
<transformationSet>
@ -243,32 +244,32 @@
</plugin>
<plugin>
<artifactId>maven-antrun-plugin</artifactId>
<artifactId>maven-antrun-plugin</artifactId>
<executions>
<!-- exec-maven-plugin executes chmod to make scripts executable -->
<execution>
<id>make-scripts-executable</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<phase>process-resources</phase>
<configuration>
<chmod file="${project.basedir}/createArchetypes.sh" perm="+x" />
<chmod file="${project.basedir}/installArchetypes.sh" perm="+x" />
<chmod file="${project.basedir}/createArchetypes.sh" perm="+x"/>
<chmod file="${project.basedir}/installArchetypes.sh" perm="+x"/>
</configuration>
</execution>
<!-- exec-maven-plugin executes script which invokes 'archetype:create-from-project'
to derive archetypes from exemplar projects. -->
<execution>
<id>run-createArchetypes-script</id>
<phase>compile</phase>
<goals>
<goal>run</goal>
</goals>
<phase>compile</phase>
<configuration>
<exec executable="${shell-executable}" dir="${project.basedir}" failonerror="true">
<arg line="./createArchetypes.sh"/>
</exec>
<exec dir="${project.basedir}" executable="${shell-executable}" failonerror="true">
<arg line="./createArchetypes.sh"/>
</exec>
</configuration>
</execution>
<!-- exec-maven-plugin executes script which invokes 'install' to install each
@ -278,14 +279,14 @@
which does test generation of a project based on the archetype. -->
<execution>
<id>run-installArchetypes-script</id>
<phase>install</phase>
<goals>
<goal>run</goal>
</goals>
<phase>install</phase>
<configuration>
<exec executable="${shell-executable}" dir="${project.basedir}" failonerror="true">
<arg line="./installArchetypes.sh"/>
</exec>
<exec dir="${project.basedir}" executable="${shell-executable}" failonerror="true">
<arg line="./installArchetypes.sh"/>
</exec>
</configuration>
</execution>
</executions>


@ -1,8 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0"
xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation=
"https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
* Licensed to the Apache Software Foundation (ASF) under one
@ -24,8 +21,8 @@
-->
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase-archetypes</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-archetypes</artifactId>
<version>2.5.0-SNAPSHOT</version>
<relativePath>..</relativePath>
</parent>


@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -38,19 +37,17 @@ import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;
/**
* Successful running of this application requires access to an active instance
* of HBase. For install instructions for a standalone instance of HBase, please
* refer to https://hbase.apache.org/book.html#quickstart
* Successful running of this application requires access to an active instance of HBase. For
* install instructions for a standalone instance of HBase, please refer to
* https://hbase.apache.org/book.html#quickstart
*/
public final class HelloHBase {
protected static final String MY_NAMESPACE_NAME = "myTestNamespace";
static final TableName MY_TABLE_NAME = TableName.valueOf("myTestTable");
static final byte[] MY_COLUMN_FAMILY_NAME = Bytes.toBytes("cf");
static final byte[] MY_FIRST_COLUMN_QUALIFIER
= Bytes.toBytes("myFirstColumn");
static final byte[] MY_SECOND_COLUMN_QUALIFIER
= Bytes.toBytes("mySecondColumn");
static final byte[] MY_FIRST_COLUMN_QUALIFIER = Bytes.toBytes("myFirstColumn");
static final byte[] MY_SECOND_COLUMN_QUALIFIER = Bytes.toBytes("mySecondColumn");
static final byte[] MY_ROW_ID = Bytes.toBytes("rowId01");
// Private constructor included here to avoid checkstyle warnings
@ -61,21 +58,21 @@ public final class HelloHBase {
final boolean deleteAllAtEOJ = true;
/**
* ConnectionFactory#createConnection() automatically looks for
* hbase-site.xml (HBase configuration parameters) on the system's
* CLASSPATH, to enable creation of Connection to HBase via ZooKeeper.
* ConnectionFactory#createConnection() automatically looks for hbase-site.xml (HBase
* configuration parameters) on the system's CLASSPATH, to enable creation of Connection to
* HBase via ZooKeeper.
*/
try (Connection connection = ConnectionFactory.createConnection();
Admin admin = connection.getAdmin()) {
Admin admin = connection.getAdmin()) {
admin.getClusterStatus(); // assure connection successfully established
System.out.println("\n*** Hello HBase! -- Connection has been "
+ "established via ZooKeeper!!\n");
System.out
.println("\n*** Hello HBase! -- Connection has been " + "established via ZooKeeper!!\n");
createNamespaceAndTable(admin);
System.out.println("Getting a Table object for [" + MY_TABLE_NAME
+ "] with which to perform CRUD operations in HBase.");
+ "] with which to perform CRUD operations in HBase.");
try (Table table = connection.getTable(MY_TABLE_NAME)) {
putRowToTable(table);
@ -93,9 +90,8 @@ public final class HelloHBase {
}
/**
* Invokes Admin#createNamespace and Admin#createTable to create a namespace
* with a table that has one column-family.
*
* Invokes Admin#createNamespace and Admin#createTable to create a namespace with a table that has
* one column-family.
* @param admin Standard Admin object
* @throws IOException If IO problem encountered
*/
@ -104,48 +100,38 @@ public final class HelloHBase {
if (!namespaceExists(admin, MY_NAMESPACE_NAME)) {
System.out.println("Creating Namespace [" + MY_NAMESPACE_NAME + "].");
admin.createNamespace(NamespaceDescriptor
.create(MY_NAMESPACE_NAME).build());
admin.createNamespace(NamespaceDescriptor.create(MY_NAMESPACE_NAME).build());
}
if (!admin.tableExists(MY_TABLE_NAME)) {
System.out.println("Creating Table [" + MY_TABLE_NAME.getNameAsString()
+ "], with one Column Family ["
+ Bytes.toString(MY_COLUMN_FAMILY_NAME) + "].");
+ "], with one Column Family [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + "].");
TableDescriptor desc = TableDescriptorBuilder.newBuilder(MY_TABLE_NAME)
.setColumnFamily(ColumnFamilyDescriptorBuilder.of(MY_COLUMN_FAMILY_NAME))
.build();
.setColumnFamily(ColumnFamilyDescriptorBuilder.of(MY_COLUMN_FAMILY_NAME)).build();
admin.createTable(desc);
}
}
/**
* Invokes Table#put to store a row (with two new columns created 'on the
* fly') into the table.
*
* Invokes Table#put to store a row (with two new columns created 'on the fly') into the table.
* @param table Standard Table object (used for CRUD operations).
* @throws IOException If IO problem encountered
*/
static void putRowToTable(final Table table) throws IOException {
table.put(new Put(MY_ROW_ID).addColumn(MY_COLUMN_FAMILY_NAME,
MY_FIRST_COLUMN_QUALIFIER,
Bytes.toBytes("Hello")).addColumn(MY_COLUMN_FAMILY_NAME,
MY_SECOND_COLUMN_QUALIFIER,
Bytes.toBytes("World!")));
table.put(new Put(MY_ROW_ID)
.addColumn(MY_COLUMN_FAMILY_NAME, MY_FIRST_COLUMN_QUALIFIER, Bytes.toBytes("Hello"))
.addColumn(MY_COLUMN_FAMILY_NAME, MY_SECOND_COLUMN_QUALIFIER, Bytes.toBytes("World!")));
System.out.println("Row [" + Bytes.toString(MY_ROW_ID)
+ "] was put into Table ["
+ table.getName().getNameAsString() + "] in HBase;\n"
+ " the row's two columns (created 'on the fly') are: ["
+ Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":"
+ Bytes.toString(MY_FIRST_COLUMN_QUALIFIER)
+ "] and [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":"
+ Bytes.toString(MY_SECOND_COLUMN_QUALIFIER) + "]");
System.out.println("Row [" + Bytes.toString(MY_ROW_ID) + "] was put into Table ["
+ table.getName().getNameAsString() + "] in HBase;\n"
+ " the row's two columns (created 'on the fly') are: ["
+ Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":" + Bytes.toString(MY_FIRST_COLUMN_QUALIFIER)
+ "] and [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":"
+ Bytes.toString(MY_SECOND_COLUMN_QUALIFIER) + "]");
}
/**
* Invokes Table#get and prints out the contents of the retrieved row.
*
* @param table Standard Table object
* @throws IOException If IO problem encountered
*/
@ -153,38 +139,32 @@ public final class HelloHBase {
Result row = table.get(new Get(MY_ROW_ID));
System.out.println("Row [" + Bytes.toString(row.getRow())
+ "] was retrieved from Table ["
+ table.getName().getNameAsString()
+ "] in HBase, with the following content:");
System.out.println("Row [" + Bytes.toString(row.getRow()) + "] was retrieved from Table ["
+ table.getName().getNameAsString() + "] in HBase, with the following content:");
for (Entry<byte[], NavigableMap<byte[], byte[]>> colFamilyEntry
: row.getNoVersionMap().entrySet()) {
for (Entry<byte[], NavigableMap<byte[], byte[]>> colFamilyEntry : row.getNoVersionMap()
.entrySet()) {
String columnFamilyName = Bytes.toString(colFamilyEntry.getKey());
System.out.println(" Columns in Column Family [" + columnFamilyName
+ "]:");
System.out.println(" Columns in Column Family [" + columnFamilyName + "]:");
for (Entry<byte[], byte[]> columnNameAndValueMap
: colFamilyEntry.getValue().entrySet()) {
for (Entry<byte[], byte[]> columnNameAndValueMap : colFamilyEntry.getValue().entrySet()) {
System.out.println(" Value of Column [" + columnFamilyName + ":"
+ Bytes.toString(columnNameAndValueMap.getKey()) + "] == "
+ Bytes.toString(columnNameAndValueMap.getValue()));
+ Bytes.toString(columnNameAndValueMap.getKey()) + "] == "
+ Bytes.toString(columnNameAndValueMap.getValue()));
}
}
}
/**
* Checks to see whether a namespace exists.
*
* @param admin Standard Admin object
* @param admin Standard Admin object
* @param namespaceName Name of namespace
* @return true If namespace exists
* @throws IOException If IO problem encountered
*/
static boolean namespaceExists(final Admin admin, final String namespaceName)
throws IOException {
static boolean namespaceExists(final Admin admin, final String namespaceName) throws IOException {
try {
admin.getNamespaceDescriptor(namespaceName);
} catch (NamespaceNotFoundException e) {
@ -195,28 +175,24 @@ public final class HelloHBase {
/**
* Invokes Table#delete to delete test data (i.e. the row)
*
* @param table Standard Table object
* @throws IOException If IO problem is encountered
*/
static void deleteRow(final Table table) throws IOException {
System.out.println("Deleting row [" + Bytes.toString(MY_ROW_ID)
+ "] from Table ["
+ table.getName().getNameAsString() + "].");
System.out.println("Deleting row [" + Bytes.toString(MY_ROW_ID) + "] from Table ["
+ table.getName().getNameAsString() + "].");
table.delete(new Delete(MY_ROW_ID));
}
/**
* Invokes Admin#disableTable, Admin#deleteTable, and Admin#deleteNamespace to
* disable/delete Table and delete Namespace.
*
* Invokes Admin#disableTable, Admin#deleteTable, and Admin#deleteNamespace to disable/delete
* Table and delete Namespace.
* @param admin Standard Admin object
* @throws IOException If IO problem is encountered
*/
static void deleteNamespaceAndTable(final Admin admin) throws IOException {
if (admin.tableExists(MY_TABLE_NAME)) {
System.out.println("Disabling/deleting Table ["
+ MY_TABLE_NAME.getNameAsString() + "].");
System.out.println("Disabling/deleting Table [" + MY_TABLE_NAME.getNameAsString() + "].");
admin.disableTable(MY_TABLE_NAME); // Disable a table before deleting it.
admin.deleteTable(MY_TABLE_NAME);
}


@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -44,10 +44,9 @@ public class TestHelloHBase {
@ClassRule
public static final HBaseClassTestRule CLASS_RULE =
HBaseClassTestRule.forClass(TestHelloHBase.class);
HBaseClassTestRule.forClass(TestHelloHBase.class);
private static final HBaseTestingUtility TEST_UTIL
= new HBaseTestingUtility();
private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
@BeforeClass
public static void beforeClass() throws Exception {
@ -67,13 +66,11 @@ public class TestHelloHBase {
Admin admin = TEST_UTIL.getAdmin();
exists = HelloHBase.namespaceExists(admin, NONEXISTENT_NAMESPACE);
assertEquals("#namespaceExists failed: found nonexistent namespace.",
false, exists);
assertEquals("#namespaceExists failed: found nonexistent namespace.", false, exists);
admin.createNamespace(NamespaceDescriptor.create(EXISTING_NAMESPACE).build());
exists = HelloHBase.namespaceExists(admin, EXISTING_NAMESPACE);
assertEquals("#namespaceExists failed: did NOT find existing namespace.",
true, exists);
assertEquals("#namespaceExists failed: did NOT find existing namespace.", true, exists);
admin.deleteNamespace(EXISTING_NAMESPACE);
}
@ -82,14 +79,11 @@ public class TestHelloHBase {
Admin admin = TEST_UTIL.getAdmin();
HelloHBase.createNamespaceAndTable(admin);
boolean namespaceExists
= HelloHBase.namespaceExists(admin, HelloHBase.MY_NAMESPACE_NAME);
assertEquals("#createNamespaceAndTable failed to create namespace.",
true, namespaceExists);
boolean namespaceExists = HelloHBase.namespaceExists(admin, HelloHBase.MY_NAMESPACE_NAME);
assertEquals("#createNamespaceAndTable failed to create namespace.", true, namespaceExists);
boolean tableExists = admin.tableExists(HelloHBase.MY_TABLE_NAME);
assertEquals("#createNamespaceAndTable failed to create table.",
true, tableExists);
assertEquals("#createNamespaceAndTable failed to create table.", true, tableExists);
admin.disableTable(HelloHBase.MY_TABLE_NAME);
admin.deleteTable(HelloHBase.MY_TABLE_NAME);
@ -100,8 +94,7 @@ public class TestHelloHBase {
public void testPutRowToTable() throws IOException {
Admin admin = TEST_UTIL.getAdmin();
admin.createNamespace(NamespaceDescriptor.create(HelloHBase.MY_NAMESPACE_NAME).build());
Table table
= TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME);
Table table = TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME);
HelloHBase.putRowToTable(table);
Result row = table.get(new Get(HelloHBase.MY_ROW_ID));
@ -115,13 +108,10 @@ public class TestHelloHBase {
public void testDeleteRow() throws IOException {
Admin admin = TEST_UTIL.getAdmin();
admin.createNamespace(NamespaceDescriptor.create(HelloHBase.MY_NAMESPACE_NAME).build());
Table table
= TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME);
Table table = TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME);
table.put(new Put(HelloHBase.MY_ROW_ID).
addColumn(HelloHBase.MY_COLUMN_FAMILY_NAME,
HelloHBase.MY_FIRST_COLUMN_QUALIFIER,
Bytes.toBytes("xyz")));
table.put(new Put(HelloHBase.MY_ROW_ID).addColumn(HelloHBase.MY_COLUMN_FAMILY_NAME,
HelloHBase.MY_FIRST_COLUMN_QUALIFIER, Bytes.toBytes("xyz")));
HelloHBase.deleteRow(table);
Result row = table.get(new Get(HelloHBase.MY_ROW_ID));
assertEquals("#deleteRow failed to delete row.", true, row.isEmpty());


@ -1,8 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0"
xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation=
"https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
* Licensed to the Apache Software Foundation (ASF) under one
@ -24,8 +21,8 @@
-->
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase-archetypes</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-archetypes</artifactId>
<version>2.5.0-SNAPSHOT</version>
<relativePath>..</relativePath>
</parent>
@ -44,16 +41,16 @@
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-testing-util</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>javax.xml.bind</groupId>
<artifactId>jaxb-api</artifactId>
</exclusion>
<exclusion>
<groupId>javax.ws.rs</groupId>
<artifactId>jsr311-api</artifactId>
</exclusion>
</exclusions>
<exclusions>
<exclusion>
<groupId>javax.xml.bind</groupId>
<artifactId>jaxb-api</artifactId>
</exclusion>
<exclusion>
<groupId>javax.ws.rs</groupId>
<artifactId>jsr311-api</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>


@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -37,19 +36,17 @@ import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
/**
* Successful running of this application requires access to an active instance
* of HBase. For install instructions for a standalone instance of HBase, please
* refer to https://hbase.apache.org/book.html#quickstart
* Successful running of this application requires access to an active instance of HBase. For
* install instructions for a standalone instance of HBase, please refer to
* https://hbase.apache.org/book.html#quickstart
*/
public final class HelloHBase {
protected static final String MY_NAMESPACE_NAME = "myTestNamespace";
static final TableName MY_TABLE_NAME = TableName.valueOf("myTestTable");
static final byte[] MY_COLUMN_FAMILY_NAME = Bytes.toBytes("cf");
static final byte[] MY_FIRST_COLUMN_QUALIFIER
= Bytes.toBytes("myFirstColumn");
static final byte[] MY_SECOND_COLUMN_QUALIFIER
= Bytes.toBytes("mySecondColumn");
static final byte[] MY_FIRST_COLUMN_QUALIFIER = Bytes.toBytes("myFirstColumn");
static final byte[] MY_SECOND_COLUMN_QUALIFIER = Bytes.toBytes("mySecondColumn");
static final byte[] MY_ROW_ID = Bytes.toBytes("rowId01");
// Private constructor included here to avoid checkstyle warnings
@ -60,21 +57,21 @@ public final class HelloHBase {
final boolean deleteAllAtEOJ = true;
/**
* ConnectionFactory#createConnection() automatically looks for
* hbase-site.xml (HBase configuration parameters) on the system's
* CLASSPATH, to enable creation of Connection to HBase via ZooKeeper.
* ConnectionFactory#createConnection() automatically looks for hbase-site.xml (HBase
* configuration parameters) on the system's CLASSPATH, to enable creation of Connection to
* HBase via ZooKeeper.
*/
try (Connection connection = ConnectionFactory.createConnection();
Admin admin = connection.getAdmin()) {
Admin admin = connection.getAdmin()) {
admin.getClusterStatus(); // assure connection successfully established
System.out.println("\n*** Hello HBase! -- Connection has been "
+ "established via ZooKeeper!!\n");
System.out
.println("\n*** Hello HBase! -- Connection has been " + "established via ZooKeeper!!\n");
createNamespaceAndTable(admin);
System.out.println("Getting a Table object for [" + MY_TABLE_NAME
+ "] with which to perform CRUD operations in HBase.");
+ "] with which to perform CRUD operations in HBase.");
try (Table table = connection.getTable(MY_TABLE_NAME)) {
putRowToTable(table);
@ -92,9 +89,8 @@ public final class HelloHBase {
}
/**
* Invokes Admin#createNamespace and Admin#createTable to create a namespace
* with a table that has one column-family.
*
* Invokes Admin#createNamespace and Admin#createTable to create a namespace with a table that has
* one column-family.
* @param admin Standard Admin object
* @throws IOException If IO problem encountered
*/
@ -103,47 +99,38 @@ public final class HelloHBase {
if (!namespaceExists(admin, MY_NAMESPACE_NAME)) {
System.out.println("Creating Namespace [" + MY_NAMESPACE_NAME + "].");
admin.createNamespace(NamespaceDescriptor
.create(MY_NAMESPACE_NAME).build());
admin.createNamespace(NamespaceDescriptor.create(MY_NAMESPACE_NAME).build());
}
if (!admin.tableExists(MY_TABLE_NAME)) {
System.out.println("Creating Table [" + MY_TABLE_NAME.getNameAsString()
+ "], with one Column Family ["
+ Bytes.toString(MY_COLUMN_FAMILY_NAME) + "].");
+ "], with one Column Family [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + "].");
admin.createTable(new HTableDescriptor(MY_TABLE_NAME)
.addFamily(new HColumnDescriptor(MY_COLUMN_FAMILY_NAME)));
.addFamily(new HColumnDescriptor(MY_COLUMN_FAMILY_NAME)));
}
}
/**
* Invokes Table#put to store a row (with two new columns created 'on the
* fly') into the table.
*
* Invokes Table#put to store a row (with two new columns created 'on the fly') into the table.
* @param table Standard Table object (used for CRUD operations).
* @throws IOException If IO problem encountered
*/
static void putRowToTable(final Table table) throws IOException {
table.put(new Put(MY_ROW_ID).addColumn(MY_COLUMN_FAMILY_NAME,
MY_FIRST_COLUMN_QUALIFIER,
Bytes.toBytes("Hello")).addColumn(MY_COLUMN_FAMILY_NAME,
MY_SECOND_COLUMN_QUALIFIER,
Bytes.toBytes("World!")));
table.put(new Put(MY_ROW_ID)
.addColumn(MY_COLUMN_FAMILY_NAME, MY_FIRST_COLUMN_QUALIFIER, Bytes.toBytes("Hello"))
.addColumn(MY_COLUMN_FAMILY_NAME, MY_SECOND_COLUMN_QUALIFIER, Bytes.toBytes("World!")));
System.out.println("Row [" + Bytes.toString(MY_ROW_ID)
+ "] was put into Table ["
+ table.getName().getNameAsString() + "] in HBase;\n"
+ " the row's two columns (created 'on the fly') are: ["
+ Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":"
+ Bytes.toString(MY_FIRST_COLUMN_QUALIFIER)
+ "] and [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":"
+ Bytes.toString(MY_SECOND_COLUMN_QUALIFIER) + "]");
System.out.println("Row [" + Bytes.toString(MY_ROW_ID) + "] was put into Table ["
+ table.getName().getNameAsString() + "] in HBase;\n"
+ " the row's two columns (created 'on the fly') are: ["
+ Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":" + Bytes.toString(MY_FIRST_COLUMN_QUALIFIER)
+ "] and [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":"
+ Bytes.toString(MY_SECOND_COLUMN_QUALIFIER) + "]");
}
/**
* Invokes Table#get and prints out the contents of the retrieved row.
*
* @param table Standard Table object
* @throws IOException If IO problem encountered
*/
@ -151,38 +138,32 @@ public final class HelloHBase {
Result row = table.get(new Get(MY_ROW_ID));
System.out.println("Row [" + Bytes.toString(row.getRow())
+ "] was retrieved from Table ["
+ table.getName().getNameAsString()
+ "] in HBase, with the following content:");
System.out.println("Row [" + Bytes.toString(row.getRow()) + "] was retrieved from Table ["
+ table.getName().getNameAsString() + "] in HBase, with the following content:");
for (Entry<byte[], NavigableMap<byte[], byte[]>> colFamilyEntry
: row.getNoVersionMap().entrySet()) {
for (Entry<byte[], NavigableMap<byte[], byte[]>> colFamilyEntry : row.getNoVersionMap()
.entrySet()) {
String columnFamilyName = Bytes.toString(colFamilyEntry.getKey());
System.out.println(" Columns in Column Family [" + columnFamilyName
+ "]:");
System.out.println(" Columns in Column Family [" + columnFamilyName + "]:");
for (Entry<byte[], byte[]> columnNameAndValueMap
: colFamilyEntry.getValue().entrySet()) {
for (Entry<byte[], byte[]> columnNameAndValueMap : colFamilyEntry.getValue().entrySet()) {
System.out.println(" Value of Column [" + columnFamilyName + ":"
+ Bytes.toString(columnNameAndValueMap.getKey()) + "] == "
+ Bytes.toString(columnNameAndValueMap.getValue()));
+ Bytes.toString(columnNameAndValueMap.getKey()) + "] == "
+ Bytes.toString(columnNameAndValueMap.getValue()));
}
}
}
/**
* Checks to see whether a namespace exists.
*
* @param admin Standard Admin object
* @param admin Standard Admin object
* @param namespaceName Name of namespace
* @return true If namespace exists
* @throws IOException If IO problem encountered
*/
static boolean namespaceExists(final Admin admin, final String namespaceName)
throws IOException {
static boolean namespaceExists(final Admin admin, final String namespaceName) throws IOException {
try {
admin.getNamespaceDescriptor(namespaceName);
} catch (NamespaceNotFoundException e) {
@ -193,28 +174,24 @@ public final class HelloHBase {
/**
* Invokes Table#delete to delete test data (i.e. the row)
*
* @param table Standard Table object
* @throws IOException If IO problem is encountered
*/
static void deleteRow(final Table table) throws IOException {
System.out.println("Deleting row [" + Bytes.toString(MY_ROW_ID)
+ "] from Table ["
+ table.getName().getNameAsString() + "].");
System.out.println("Deleting row [" + Bytes.toString(MY_ROW_ID) + "] from Table ["
+ table.getName().getNameAsString() + "].");
table.delete(new Delete(MY_ROW_ID));
}
/**
* Invokes Admin#disableTable, Admin#deleteTable, and Admin#deleteNamespace to
* disable/delete Table and delete Namespace.
*
* Invokes Admin#disableTable, Admin#deleteTable, and Admin#deleteNamespace to disable/delete
* Table and delete Namespace.
* @param admin Standard Admin object
* @throws IOException If IO problem is encountered
*/
static void deleteNamespaceAndTable(final Admin admin) throws IOException {
if (admin.tableExists(MY_TABLE_NAME)) {
System.out.println("Disabling/deleting Table ["
+ MY_TABLE_NAME.getNameAsString() + "].");
System.out.println("Disabling/deleting Table [" + MY_TABLE_NAME.getNameAsString() + "].");
admin.disableTable(MY_TABLE_NAME); // Disable a table before deleting it.
admin.deleteTable(MY_TABLE_NAME);
}


@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -44,10 +44,9 @@ public class TestHelloHBase {
@ClassRule
public static final HBaseClassTestRule CLASS_RULE =
HBaseClassTestRule.forClass(TestHelloHBase.class);
HBaseClassTestRule.forClass(TestHelloHBase.class);
private static final HBaseTestingUtility TEST_UTIL
= new HBaseTestingUtility();
private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
@BeforeClass
public static void beforeClass() throws Exception {
@ -67,13 +66,11 @@ public class TestHelloHBase {
Admin admin = TEST_UTIL.getAdmin();
exists = HelloHBase.namespaceExists(admin, NONEXISTENT_NAMESPACE);
assertEquals("#namespaceExists failed: found nonexistent namespace.",
false, exists);
assertEquals("#namespaceExists failed: found nonexistent namespace.", false, exists);
admin.createNamespace(NamespaceDescriptor.create(EXISTING_NAMESPACE).build());
exists = HelloHBase.namespaceExists(admin, EXISTING_NAMESPACE);
assertEquals("#namespaceExists failed: did NOT find existing namespace.",
true, exists);
assertEquals("#namespaceExists failed: did NOT find existing namespace.", true, exists);
admin.deleteNamespace(EXISTING_NAMESPACE);
}
@ -82,14 +79,11 @@ public class TestHelloHBase {
Admin admin = TEST_UTIL.getAdmin();
HelloHBase.createNamespaceAndTable(admin);
boolean namespaceExists
= HelloHBase.namespaceExists(admin, HelloHBase.MY_NAMESPACE_NAME);
assertEquals("#createNamespaceAndTable failed to create namespace.",
true, namespaceExists);
boolean namespaceExists = HelloHBase.namespaceExists(admin, HelloHBase.MY_NAMESPACE_NAME);
assertEquals("#createNamespaceAndTable failed to create namespace.", true, namespaceExists);
boolean tableExists = admin.tableExists(HelloHBase.MY_TABLE_NAME);
assertEquals("#createNamespaceAndTable failed to create table.",
true, tableExists);
assertEquals("#createNamespaceAndTable failed to create table.", true, tableExists);
admin.disableTable(HelloHBase.MY_TABLE_NAME);
admin.deleteTable(HelloHBase.MY_TABLE_NAME);
@ -100,8 +94,7 @@ public class TestHelloHBase {
public void testPutRowToTable() throws IOException {
Admin admin = TEST_UTIL.getAdmin();
admin.createNamespace(NamespaceDescriptor.create(HelloHBase.MY_NAMESPACE_NAME).build());
Table table
= TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME);
Table table = TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME);
HelloHBase.putRowToTable(table);
Result row = table.get(new Get(HelloHBase.MY_ROW_ID));
@ -115,13 +108,10 @@ public class TestHelloHBase {
public void testDeleteRow() throws IOException {
Admin admin = TEST_UTIL.getAdmin();
admin.createNamespace(NamespaceDescriptor.create(HelloHBase.MY_NAMESPACE_NAME).build());
Table table
= TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME);
Table table = TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME);
table.put(new Put(HelloHBase.MY_ROW_ID).
addColumn(HelloHBase.MY_COLUMN_FAMILY_NAME,
HelloHBase.MY_FIRST_COLUMN_QUALIFIER,
Bytes.toBytes("xyz")));
table.put(new Put(HelloHBase.MY_ROW_ID).addColumn(HelloHBase.MY_COLUMN_FAMILY_NAME,
HelloHBase.MY_FIRST_COLUMN_QUALIFIER, Bytes.toBytes("xyz")));
HelloHBase.deleteRow(table);
Result row = table.get(new Get(HelloHBase.MY_ROW_ID));
assertEquals("#deleteRow failed to delete row.", true, row.isEmpty());


@ -1,6 +1,5 @@
<?xml version="1.0"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
* Licensed to the Apache Software Foundation (ASF) under one
@ -22,8 +21,8 @@
-->
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase-build-configuration</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-build-configuration</artifactId>
<version>2.5.0-SNAPSHOT</version>
<relativePath>../hbase-build-configuration</relativePath>
</parent>
@ -68,10 +67,10 @@
<artifactId>spotbugs-maven-plugin</artifactId>
<executions>
<execution>
<inherited>false</inherited>
<goals>
<goal>spotbugs</goal>
</goals>
<inherited>false</inherited>
<configuration>
<excludeFilterFile>${project.basedir}/../dev-support/spotbugs-exclude.xml</excludeFilterFile>
</configuration>


@ -1,4 +1,4 @@
<?xml version="1.0"?>
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
@ -21,160 +21,18 @@
-->
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase-build-configuration</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-build-configuration</artifactId>
<version>2.5.0-SNAPSHOT</version>
<relativePath>../hbase-build-configuration</relativePath>
</parent>
<artifactId>hbase-assembly</artifactId>
<name>Apache HBase - Assembly</name>
<description>
Module that does project assembly and that is all that it does.
</description>
<packaging>pom</packaging>
<name>Apache HBase - Assembly</name>
<description>Module that does project assembly and that is all that it does.</description>
<properties>
<license.bundles.dependencies>true</license.bundles.dependencies>
</properties>
<build>
<plugins>
<!-- licensing info from our dependencies -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-remote-resources-plugin</artifactId>
<executions>
<execution>
<id>aggregate-licenses</id>
<goals>
<goal>process</goal>
</goals>
<configuration>
<properties>
<copyright-end-year>${build.year}</copyright-end-year>
<debug-print-included-work-info>${license.debug.print.included}</debug-print-included-work-info>
<bundled-dependencies>${license.bundles.dependencies}</bundled-dependencies>
<bundled-jquery>${license.bundles.jquery}</bundled-jquery>
<bundled-vega>${license.bundles.vega}</bundled-vega>
<bundled-logo>${license.bundles.logo}</bundled-logo>
<bundled-bootstrap>${license.bundles.bootstrap}</bundled-bootstrap>
</properties>
<resourceBundles>
<resourceBundle>${project.groupId}:hbase-resource-bundle:${project.version}</resourceBundle>
</resourceBundles>
<supplementalModelArtifacts>
<supplementalModelArtifact>${project.groupId}:hbase-resource-bundle:${project.version}</supplementalModelArtifact>
</supplementalModelArtifacts>
<supplementalModels>
<supplementalModel>supplemental-models.xml</supplementalModel>
</supplementalModels>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<!--Else will use hbase-assembly as final name.-->
<finalName>hbase-${project.version}</finalName>
<skipAssembly>false</skipAssembly>
<appendAssemblyId>true</appendAssemblyId>
<tarLongFileMode>posix</tarLongFileMode>
<descriptors>
<descriptor>${assembly.file}</descriptor>
<descriptor>src/main/assembly/client.xml</descriptor>
</descriptors>
</configuration>
</plugin>
<plugin>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<!-- generates the file that will be used by the bin/hbase script in the dev env -->
<id>create-hbase-generated-classpath</id>
<phase>test</phase>
<goals>
<goal>build-classpath</goal>
</goals>
<configuration>
<outputFile>${project.parent.basedir}/target/cached_classpath.txt</outputFile>
<excludeArtifactIds>jline,jruby-complete,hbase-shaded-client,hbase-shaded-client-byo-hadoop,hbase-shaded-mapreduce</excludeArtifactIds>
</configuration>
</execution>
<execution>
<!-- generates the file that will be used by the bin/hbase zkcli script in the dev env -->
<id>create-hbase-generated-classpath-jline</id>
<phase>test</phase>
<goals>
<goal>build-classpath</goal>
</goals>
<configuration>
<outputFile>${project.parent.basedir}/target/cached_classpath_jline.txt</outputFile>
<includeArtifactIds>jline</includeArtifactIds>
</configuration>
</execution>
<execution>
<!-- generates the file that will be used by the bin/hbase shell script in the dev env -->
<id>create-hbase-generated-classpath-jruby</id>
<phase>test</phase>
<goals>
<goal>build-classpath</goal>
</goals>
<configuration>
<outputFile>${project.parent.basedir}/target/cached_classpath_jruby.txt</outputFile>
<includeArtifactIds>jruby-complete</includeArtifactIds>
</configuration>
</execution>
<!--
Build an aggregation of our templated NOTICE file and the NOTICE files in our dependencies.
If MASSEMBLY-382 is fixed we could do this in the assembly
Currently relies on env, bash, find, and cat.
-->
<execution>
<!-- put all of the NOTICE files out of our dependencies -->
<id>unpack-dependency-notices</id>
<phase>prepare-package</phase>
<goals>
<goal>unpack-dependencies</goal>
</goals>
<configuration>
<excludeTypes>pom</excludeTypes>
<useSubDirectoryPerArtifact>true</useSubDirectoryPerArtifact>
<includes>**\/NOTICE,**\/NOTICE.txt</includes>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>${exec.maven.version}</version>
<executions>
<execution>
<id>concat-NOTICE-files</id>
<phase>package</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>env</executable>
<arguments>
<argument>bash</argument>
<argument>-c</argument>
<argument>cat maven-shared-archive-resources/META-INF/NOTICE \
`find ${project.build.directory}/dependency -iname NOTICE -or -iname NOTICE.txt`
</argument>
</arguments>
<outputFile>${project.build.directory}/NOTICE.aggregate</outputFile>
<workingDirectory>${project.build.directory}</workingDirectory>
</configuration>
</execution>
</executions>
</plugin>
<!-- /end building aggregation of NOTICE files -->
</plugins>
</build>
<dependencies>
<!-- client artifacts for downstream use -->
<dependency>
@ -189,7 +47,7 @@
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-shaded-mapreduce</artifactId>
</dependency>
<!-- Intra-project dependencies -->
<!-- Intra-project dependencies -->
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-it</artifactId>
@ -258,16 +116,16 @@
<artifactId>hbase-external-blockcache</artifactId>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-testing-util</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-testing-util</artifactId>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-metrics-api</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-metrics-api</artifactId>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-metrics</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-metrics</artifactId>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
@ -278,9 +136,9 @@
<artifactId>hbase-protocol-shaded</artifactId>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-resource-bundle</artifactId>
<optional>true</optional>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-resource-bundle</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
@ -379,12 +237,151 @@
<artifactId>log4j-1.2-api</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<!-- licensing info from our dependencies -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-remote-resources-plugin</artifactId>
<executions>
<execution>
<id>aggregate-licenses</id>
<goals>
<goal>process</goal>
</goals>
<configuration>
<properties>
<copyright-end-year>${build.year}</copyright-end-year>
<debug-print-included-work-info>${license.debug.print.included}</debug-print-included-work-info>
<bundled-dependencies>${license.bundles.dependencies}</bundled-dependencies>
<bundled-jquery>${license.bundles.jquery}</bundled-jquery>
<bundled-vega>${license.bundles.vega}</bundled-vega>
<bundled-logo>${license.bundles.logo}</bundled-logo>
<bundled-bootstrap>${license.bundles.bootstrap}</bundled-bootstrap>
</properties>
<resourceBundles>
<resourceBundle>${project.groupId}:hbase-resource-bundle:${project.version}</resourceBundle>
</resourceBundles>
<supplementalModelArtifacts>
<supplementalModelArtifact>${project.groupId}:hbase-resource-bundle:${project.version}</supplementalModelArtifact>
</supplementalModelArtifacts>
<supplementalModels>
<supplementalModel>supplemental-models.xml</supplementalModel>
</supplementalModels>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<!--Else will use hbase-assembly as final name.-->
<finalName>hbase-${project.version}</finalName>
<skipAssembly>false</skipAssembly>
<appendAssemblyId>true</appendAssemblyId>
<tarLongFileMode>posix</tarLongFileMode>
<descriptors>
<descriptor>${assembly.file}</descriptor>
<descriptor>src/main/assembly/client.xml</descriptor>
</descriptors>
</configuration>
</plugin>
<plugin>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<!-- generates the file that will be used by the bin/hbase script in the dev env -->
<id>create-hbase-generated-classpath</id>
<goals>
<goal>build-classpath</goal>
</goals>
<phase>test</phase>
<configuration>
<outputFile>${project.parent.basedir}/target/cached_classpath.txt</outputFile>
<excludeArtifactIds>jline,jruby-complete,hbase-shaded-client,hbase-shaded-client-byo-hadoop,hbase-shaded-mapreduce</excludeArtifactIds>
</configuration>
</execution>
<execution>
<!-- generates the file that will be used by the bin/hbase zkcli script in the dev env -->
<id>create-hbase-generated-classpath-jline</id>
<goals>
<goal>build-classpath</goal>
</goals>
<phase>test</phase>
<configuration>
<outputFile>${project.parent.basedir}/target/cached_classpath_jline.txt</outputFile>
<includeArtifactIds>jline</includeArtifactIds>
</configuration>
</execution>
<execution>
<!-- generates the file that will be used by the bin/hbase shell script in the dev env -->
<id>create-hbase-generated-classpath-jruby</id>
<goals>
<goal>build-classpath</goal>
</goals>
<phase>test</phase>
<configuration>
<outputFile>${project.parent.basedir}/target/cached_classpath_jruby.txt</outputFile>
<includeArtifactIds>jruby-complete</includeArtifactIds>
</configuration>
</execution>
<!--
Build an aggregation of our templated NOTICE file and the NOTICE files in our dependencies.
If MASSEMBLY-382 is fixed we could do this in the assembly
Currently relies on env, bash, find, and cat.
-->
<execution>
<!-- put all of the NOTICE files out of our dependencies -->
<id>unpack-dependency-notices</id>
<goals>
<goal>unpack-dependencies</goal>
</goals>
<phase>prepare-package</phase>
<configuration>
<excludeTypes>pom</excludeTypes>
<useSubDirectoryPerArtifact>true</useSubDirectoryPerArtifact>
<includes>**\/NOTICE,**\/NOTICE.txt</includes>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>${exec.maven.version}</version>
<executions>
<execution>
<id>concat-NOTICE-files</id>
<goals>
<goal>exec</goal>
</goals>
<phase>package</phase>
<configuration>
<executable>env</executable>
<arguments>
<argument>bash</argument>
<argument>-c</argument>
<argument>cat maven-shared-archive-resources/META-INF/NOTICE \
`find ${project.build.directory}/dependency -iname NOTICE -or -iname NOTICE.txt`</argument>
</arguments>
<outputFile>${project.build.directory}/NOTICE.aggregate</outputFile>
<workingDirectory>${project.build.directory}</workingDirectory>
</configuration>
</execution>
</executions>
</plugin>
<!-- /end building aggregation of NOTICE files -->
</plugins>
</build>
<profiles>
<profile>
<id>rsgroup</id>
<activation>
<property>
<name>!skip-rsgroup</name>
<name>!skip-rsgroup</name>
</property>
</activation>
<dependencies>
@ -392,18 +389,18 @@
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-rsgroup</artifactId>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
</dependency>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-core</artifactId>
<!-- Making it compile here so that it can be downloaded during packaging. -->
<!-- All other modules scope it as test or inherit test scope from parent pom. -->
<scope>compile</scope>
</dependency>
</dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
</dependency>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-core</artifactId>
<!-- Making it compile here so that it can be downloaded during packaging. -->
<!-- All other modules scope it as test or inherit test scope from parent pom. -->
<scope>compile</scope>
</dependency>
</dependencies>
</profile>
</profiles>
</project>


@ -1,6 +1,5 @@
<?xml version="1.0"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
* Licensed to the Apache Software Foundation (ASF) under one
@ -22,8 +21,8 @@
-->
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase-build-configuration</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-build-configuration</artifactId>
<version>2.5.0-SNAPSHOT</version>
<relativePath>../hbase-build-configuration</relativePath>
</parent>
@ -31,33 +30,6 @@
<artifactId>hbase-asyncfs</artifactId>
<name>Apache HBase - Asynchronous FileSystem</name>
<description>HBase Asynchronous FileSystem Implementation for WAL</description>
<build>
<plugins>
<!-- Make a jar and put the sources in the jar -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-source-plugin</artifactId>
</plugin>
<plugin>
<!--Make it so assembly:single does nothing in here-->
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<skipAssembly>true</skipAssembly>
</configuration>
</plugin>
<plugin>
<groupId>net.revelc.code</groupId>
<artifactId>warbucks-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
<configuration>
<failOnViolation>true</failOnViolation>
</configuration>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
@ -169,6 +141,33 @@
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<!-- Make a jar and put the sources in the jar -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-source-plugin</artifactId>
</plugin>
<plugin>
<!--Make it so assembly:single does nothing in here-->
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<skipAssembly>true</skipAssembly>
</configuration>
</plugin>
<plugin>
<groupId>net.revelc.code</groupId>
<artifactId>warbucks-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
<configuration>
<failOnViolation>true</failOnViolation>
</configuration>
</plugin>
</plugins>
</build>
<profiles>
<!-- Profiles for building against different hadoop versions -->
@ -176,8 +175,9 @@
<id>hadoop-2.0</id>
<activation>
<property>
<!--Below formatting for dev-support/generate-hadoopX-poms.sh-->
<!--h2--><name>!hadoop.profile</name>
<!--Below formatting for dev-support/generate-hadoopX-poms.sh-->
<!--h2-->
<name>!hadoop.profile</name>
</property>
</activation>
<dependencies>
@ -265,8 +265,7 @@
<artifactId>lifecycle-mapping</artifactId>
<configuration>
<lifecycleMappingMetadata>
<pluginExecutions>
</pluginExecutions>
<pluginExecutions/>
</lifecycleMappingMetadata>
</configuration>
</plugin>


@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -21,10 +21,9 @@ import java.io.Closeable;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.concurrent.CompletableFuture;
import org.apache.yetus.audience.InterfaceAudience;
import org.apache.hadoop.hbase.util.CancelableProgressable;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.yetus.audience.InterfaceAudience;
/**
* Interface for asynchronous filesystem output stream.


@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -47,9 +47,9 @@ public final class AsyncFSOutputHelper {
* implementation for other {@link FileSystem} which wraps around a {@link FSDataOutputStream}.
*/
public static AsyncFSOutput createOutput(FileSystem fs, Path f, boolean overwrite,
boolean createParent, short replication, long blockSize, EventLoopGroup eventLoopGroup,
Class<? extends Channel> channelClass, StreamSlowMonitor monitor)
throws IOException, CommonFSUtils.StreamLacksCapabilityException {
boolean createParent, short replication, long blockSize, EventLoopGroup eventLoopGroup,
Class<? extends Channel> channelClass, StreamSlowMonitor monitor)
throws IOException, CommonFSUtils.StreamLacksCapabilityException {
if (fs instanceof DistributedFileSystem) {
return FanOutOneBlockAsyncDFSOutputHelper.createOutput((DistributedFileSystem) fs, f,
overwrite, createParent, replication, blockSize, eventLoopGroup, channelClass, monitor);


@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -22,9 +22,9 @@ import static org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHel
import static org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile;
import static org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.endFileLease;
import static org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.getStatus;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_CLIENT_SOCKET_TIMEOUT_KEY;
import static org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleState.READER_IDLE;
import static org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleState.WRITER_IDLE;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_CLIENT_SOCKET_TIMEOUT_KEY;
import com.google.errorprone.annotations.RestrictedApi;
import java.io.IOException;
@ -41,7 +41,6 @@ import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.Encryptor;
import org.apache.hadoop.fs.Path;
@ -181,7 +180,10 @@ public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {
// State for connections to DN
private enum State {
STREAMING, CLOSING, BROKEN, CLOSED
STREAMING,
CLOSING,
BROKEN,
CLOSED
}
private volatile State state;
@ -197,7 +199,7 @@ public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {
if (c.unfinishedReplicas.remove(channel.id())) {
long current = EnvironmentEdgeManager.currentTime();
streamSlowMonitor.checkProcessTimeAndSpeed(datanodeInfoMap.get(channel), c.packetDataLen,
current - c.flushTimestamp, c.lastAckTimestamp, c.unfinishedReplicas.size());
current - c.flushTimestamp, c.lastAckTimestamp, c.unfinishedReplicas.size());
c.lastAckTimestamp = current;
if (c.unfinishedReplicas.isEmpty()) {
// we need to remove first before complete the future. It is possible that after we
@ -285,13 +287,13 @@ public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {
protected void channelRead0(ChannelHandlerContext ctx, PipelineAckProto ack) throws Exception {
Status reply = getStatus(ack);
if (reply != Status.SUCCESS) {
failed(ctx.channel(), () -> new IOException("Bad response " + reply + " for block " +
block + " from datanode " + ctx.channel().remoteAddress()));
failed(ctx.channel(), () -> new IOException("Bad response " + reply + " for block " + block
+ " from datanode " + ctx.channel().remoteAddress()));
return;
}
if (PipelineAck.isRestartOOBStatus(reply)) {
failed(ctx.channel(), () -> new IOException("Restart response " + reply + " for block " +
block + " from datanode " + ctx.channel().remoteAddress()));
failed(ctx.channel(), () -> new IOException("Restart response " + reply + " for block "
+ block + " from datanode " + ctx.channel().remoteAddress()));
return;
}
if (ack.getSeqno() == HEART_BEAT_SEQNO) {
@ -346,10 +348,10 @@ public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {
}
}
FanOutOneBlockAsyncDFSOutput(Configuration conf,DistributedFileSystem dfs,
DFSClient client, ClientProtocol namenode, String clientName, String src, long fileId,
LocatedBlock locatedBlock, Encryptor encryptor, Map<Channel, DatanodeInfo> datanodeInfoMap,
DataChecksum summer, ByteBufAllocator alloc, StreamSlowMonitor streamSlowMonitor) {
FanOutOneBlockAsyncDFSOutput(Configuration conf, DistributedFileSystem dfs, DFSClient client,
ClientProtocol namenode, String clientName, String src, long fileId, LocatedBlock locatedBlock,
Encryptor encryptor, Map<Channel, DatanodeInfo> datanodeInfoMap, DataChecksum summer,
ByteBufAllocator alloc, StreamSlowMonitor streamSlowMonitor) {
this.conf = conf;
this.dfs = dfs;
this.client = client;
@ -404,7 +406,7 @@ public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {
}
private void flushBuffer(CompletableFuture<Long> future, ByteBuf dataBuf,
long nextPacketOffsetInBlock, boolean syncBlock) {
long nextPacketOffsetInBlock, boolean syncBlock) {
int dataLen = dataBuf.readableBytes();
int chunkLen = summer.getBytesPerChecksum();
int trailingPartialChunkLen = dataLen % chunkLen;
@ -414,13 +416,13 @@ public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {
summer.calculateChunkedSums(dataBuf.nioBuffer(), checksumBuf.nioBuffer(0, checksumLen));
checksumBuf.writerIndex(checksumLen);
PacketHeader header = new PacketHeader(4 + checksumLen + dataLen, nextPacketOffsetInBlock,
nextPacketSeqno, false, dataLen, syncBlock);
nextPacketSeqno, false, dataLen, syncBlock);
int headerLen = header.getSerializedSize();
ByteBuf headerBuf = alloc.buffer(headerLen);
header.putInBuffer(headerBuf.nioBuffer(0, headerLen));
headerBuf.writerIndex(headerLen);
Callback c = new Callback(future, nextPacketOffsetInBlock + dataLen,
datanodeInfoMap.keySet(), dataLen);
Callback c =
new Callback(future, nextPacketOffsetInBlock + dataLen, datanodeInfoMap.keySet(), dataLen);
waitingAckQueue.addLast(c);
// recheck again after we pushed the callback to queue
if (state != State.STREAMING && waitingAckQueue.peekFirst() == c) {
@ -430,7 +432,7 @@ public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {
return;
}
// TODO: we should perhaps measure time taken per DN here;
// we could collect statistics per DN, and/or exclude bad nodes in createOutput.
// we could collect statistics per DN, and/or exclude bad nodes in createOutput.
datanodeInfoMap.keySet().forEach(ch -> {
ch.write(headerBuf.retainedDuplicate());
ch.write(checksumBuf.retainedDuplicate());
@ -515,7 +517,7 @@ public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {
}
trailingPartialChunkLength = dataLen % summer.getBytesPerChecksum();
ByteBuf newBuf = alloc.directBuffer(sendBufSizePRedictor.guess(dataLen))
.ensureWritable(trailingPartialChunkLength);
.ensureWritable(trailingPartialChunkLength);
if (trailingPartialChunkLength != 0) {
buf.readerIndex(dataLen - trailingPartialChunkLength).readBytes(newBuf,
trailingPartialChunkLength);
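Much of the class above is bookkeeping for per-packet acks: each flushed packet records which datanode channels still owe an ack, and its future completes only once that set drains (see the unfinishedReplicas handling and the checkProcessTimeAndSpeed call earlier in this file). A simplified, hypothetical sketch of that pattern, with plain strings standing in for netty channel ids:

```java
import java.util.Collection;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical, simplified version of the per-packet callback bookkeeping.
final class PacketCallback {
  private final CompletableFuture<Long> future;  // completed when every replica has acked
  private final long ackedLength;                // file length covered once this packet is acked
  private final Set<String> unfinishedReplicas;  // datanode/channel ids that still owe an ack

  PacketCallback(CompletableFuture<Long> future, long ackedLength, Collection<String> replicaIds) {
    this.future = future;
    this.ackedLength = ackedLength;
    this.unfinishedReplicas = ConcurrentHashMap.newKeySet();
    this.unfinishedReplicas.addAll(replicaIds);
  }

  /** Called when one datanode acks this packet; completes the future on the last ack. */
  void onAck(String replicaId) {
    if (unfinishedReplicas.remove(replicaId) && unfinishedReplicas.isEmpty()) {
      future.complete(ackedLength);
    }
  }

  public static void main(String[] args) {
    CompletableFuture<Long> f = new CompletableFuture<>();
    PacketCallback c = new PacketCallback(f, 128, java.util.Arrays.asList("dn1", "dn2", "dn3"));
    c.onAck("dn1");
    c.onAck("dn2");
    c.onAck("dn3");
    System.out.println("acked length = " + f.join());
  }
}
```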


@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -116,7 +116,7 @@ import org.apache.hbase.thirdparty.io.netty.util.concurrent.Promise;
@InterfaceAudience.Private
public final class FanOutOneBlockAsyncDFSOutputHelper {
private static final Logger LOG =
LoggerFactory.getLogger(FanOutOneBlockAsyncDFSOutputHelper.class);
LoggerFactory.getLogger(FanOutOneBlockAsyncDFSOutputHelper.class);
private FanOutOneBlockAsyncDFSOutputHelper() {
}
@ -154,9 +154,8 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
// helper class for creating files.
private interface FileCreator {
default HdfsFileStatus create(ClientProtocol instance, String src, FsPermission masked,
String clientName, EnumSetWritable<CreateFlag> flag, boolean createParent,
short replication, long blockSize, CryptoProtocolVersion[] supportedVersions)
throws Exception {
String clientName, EnumSetWritable<CreateFlag> flag, boolean createParent, short replication,
long blockSize, CryptoProtocolVersion[] supportedVersions) throws Exception {
try {
return (HdfsFileStatus) createObject(instance, src, masked, clientName, flag, createParent,
replication, blockSize, supportedVersions);
@ -170,8 +169,8 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
};
Object createObject(ClientProtocol instance, String src, FsPermission masked, String clientName,
EnumSetWritable<CreateFlag> flag, boolean createParent, short replication, long blockSize,
CryptoProtocolVersion[] supportedVersions) throws Exception;
EnumSetWritable<CreateFlag> flag, boolean createParent, short replication, long blockSize,
CryptoProtocolVersion[] supportedVersions) throws Exception;
}
private static final FileCreator FILE_CREATOR;
@ -199,7 +198,7 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
private static LeaseManager createLeaseManager() throws NoSuchMethodException {
Method beginFileLeaseMethod =
DFSClient.class.getDeclaredMethod("beginFileLease", long.class, DFSOutputStream.class);
DFSClient.class.getDeclaredMethod("beginFileLease", long.class, DFSOutputStream.class);
beginFileLeaseMethod.setAccessible(true);
Method endFileLeaseMethod = DFSClient.class.getDeclaredMethod("endFileLease", long.class);
endFileLeaseMethod.setAccessible(true);
@ -227,13 +226,13 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
private static FileCreator createFileCreator3_3() throws NoSuchMethodException {
Method createMethod = ClientProtocol.class.getMethod("create", String.class, FsPermission.class,
String.class, EnumSetWritable.class, boolean.class, short.class, long.class,
CryptoProtocolVersion[].class, String.class, String.class);
String.class, EnumSetWritable.class, boolean.class, short.class, long.class,
CryptoProtocolVersion[].class, String.class, String.class);
return (instance, src, masked, clientName, flag, createParent, replication, blockSize,
supportedVersions) -> {
supportedVersions) -> {
return (HdfsFileStatus) createMethod.invoke(instance, src, masked, clientName, flag,
createParent, replication, blockSize, supportedVersions, null, null);
createParent, replication, blockSize, supportedVersions, null, null);
};
}
@ -243,7 +242,7 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
CryptoProtocolVersion[].class, String.class);
return (instance, src, masked, clientName, flag, createParent, replication, blockSize,
supportedVersions) -> {
supportedVersions) -> {
return (HdfsFileStatus) createMethod.invoke(instance, src, masked, clientName, flag,
createParent, replication, blockSize, supportedVersions, null);
};
@ -255,7 +254,7 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
CryptoProtocolVersion[].class);
return (instance, src, masked, clientName, flag, createParent, replication, blockSize,
supportedVersions) -> {
supportedVersions) -> {
return (HdfsFileStatus) createMethod.invoke(instance, src, masked, clientName, flag,
createParent, replication, blockSize, supportedVersions);
};
@ -307,9 +306,9 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
FILE_CREATOR = createFileCreator();
SHOULD_REPLICATE_FLAG = loadShouldReplicateFlag();
} catch (Exception e) {
String msg = "Couldn't properly initialize access to HDFS internals. Please " +
"update your WAL Provider to not make use of the 'asyncfs' provider. See " +
"HBASE-16110 for more information.";
String msg = "Couldn't properly initialize access to HDFS internals. Please "
+ "update your WAL Provider to not make use of the 'asyncfs' provider. See "
+ "HBASE-16110 for more information.";
LOG.error(msg, e);
throw new Error(msg, e);
}
@ -340,7 +339,7 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
}
private static void processWriteBlockResponse(Channel channel, DatanodeInfo dnInfo,
Promise<Channel> promise, int timeoutMs) {
Promise<Channel> promise, int timeoutMs) {
channel.pipeline().addLast(new IdleStateHandler(timeoutMs, 0, 0, TimeUnit.MILLISECONDS),
new ProtobufVarint32FrameDecoder(),
new ProtobufDecoder(BlockOpResponseProto.getDefaultInstance()),
@ -348,7 +347,7 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
@Override
protected void channelRead0(ChannelHandlerContext ctx, BlockOpResponseProto resp)
throws Exception {
throws Exception {
Status pipelineStatus = resp.getStatus();
if (PipelineAck.isRestartOOBStatus(pipelineStatus)) {
throw new IOException("datanode " + dnInfo + " is restarting");
@ -356,11 +355,11 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
String logInfo = "ack with firstBadLink as " + resp.getFirstBadLink();
if (resp.getStatus() != Status.SUCCESS) {
if (resp.getStatus() == Status.ERROR_ACCESS_TOKEN) {
throw new InvalidBlockTokenException("Got access token error" + ", status message " +
resp.getMessage() + ", " + logInfo);
throw new InvalidBlockTokenException("Got access token error" + ", status message "
+ resp.getMessage() + ", " + logInfo);
} else {
throw new IOException("Got error" + ", status=" + resp.getStatus().name() +
", status message " + resp.getMessage() + ", " + logInfo);
throw new IOException("Got error" + ", status=" + resp.getStatus().name()
+ ", status message " + resp.getMessage() + ", " + logInfo);
}
}
// success
@ -387,7 +386,7 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
if (evt instanceof IdleStateEvent && ((IdleStateEvent) evt).state() == READER_IDLE) {
promise
.tryFailure(new IOException("Timeout(" + timeoutMs + "ms) waiting for response"));
.tryFailure(new IOException("Timeout(" + timeoutMs + "ms) waiting for response"));
} else {
super.userEventTriggered(ctx, evt);
}
@ -401,7 +400,7 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
}
private static void requestWriteBlock(Channel channel, StorageType storageType,
OpWriteBlockProto.Builder writeBlockProtoBuilder) throws IOException {
OpWriteBlockProto.Builder writeBlockProtoBuilder) throws IOException {
OpWriteBlockProto proto =
writeBlockProtoBuilder.setStorageType(PBHelperClient.convertStorageType(storageType)).build();
int protoLen = proto.getSerializedSize();
@ -414,9 +413,9 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
}
private static void initialize(Configuration conf, Channel channel, DatanodeInfo dnInfo,
StorageType storageType, OpWriteBlockProto.Builder writeBlockProtoBuilder, int timeoutMs,
DFSClient client, Token<BlockTokenIdentifier> accessToken, Promise<Channel> promise)
throws IOException {
StorageType storageType, OpWriteBlockProto.Builder writeBlockProtoBuilder, int timeoutMs,
DFSClient client, Token<BlockTokenIdentifier> accessToken, Promise<Channel> promise)
throws IOException {
Promise<Void> saslPromise = channel.eventLoop().newPromise();
trySaslNegotiate(conf, channel, dnInfo, timeoutMs, client, accessToken, saslPromise);
saslPromise.addListener(new FutureListener<Void>() {
@ -435,13 +434,13 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
}
private static List<Future<Channel>> connectToDataNodes(Configuration conf, DFSClient client,
String clientName, LocatedBlock locatedBlock, long maxBytesRcvd, long latestGS,
BlockConstructionStage stage, DataChecksum summer, EventLoopGroup eventLoopGroup,
Class<? extends Channel> channelClass) {
String clientName, LocatedBlock locatedBlock, long maxBytesRcvd, long latestGS,
BlockConstructionStage stage, DataChecksum summer, EventLoopGroup eventLoopGroup,
Class<? extends Channel> channelClass) {
StorageType[] storageTypes = locatedBlock.getStorageTypes();
DatanodeInfo[] datanodeInfos = locatedBlock.getLocations();
boolean connectToDnViaHostname =
conf.getBoolean(DFS_CLIENT_USE_DN_HOSTNAME, DFS_CLIENT_USE_DN_HOSTNAME_DEFAULT);
conf.getBoolean(DFS_CLIENT_USE_DN_HOSTNAME, DFS_CLIENT_USE_DN_HOSTNAME_DEFAULT);
int timeoutMs = conf.getInt(DFS_CLIENT_SOCKET_TIMEOUT_KEY, READ_TIMEOUT);
ExtendedBlock blockCopy = new ExtendedBlock(locatedBlock.getBlock());
blockCopy.setNumBytes(locatedBlock.getBlockSize());
@ -450,11 +449,11 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
.setToken(PBHelperClient.convert(locatedBlock.getBlockToken())))
.setClientName(clientName).build();
ChecksumProto checksumProto = DataTransferProtoUtil.toProto(summer);
OpWriteBlockProto.Builder writeBlockProtoBuilder = OpWriteBlockProto.newBuilder()
.setHeader(header).setStage(OpWriteBlockProto.BlockConstructionStage.valueOf(stage.name()))
.setPipelineSize(1).setMinBytesRcvd(locatedBlock.getBlock().getNumBytes())
.setMaxBytesRcvd(maxBytesRcvd).setLatestGenerationStamp(latestGS)
.setRequestedChecksum(checksumProto)
OpWriteBlockProto.Builder writeBlockProtoBuilder =
OpWriteBlockProto.newBuilder().setHeader(header)
.setStage(OpWriteBlockProto.BlockConstructionStage.valueOf(stage.name())).setPipelineSize(1)
.setMinBytesRcvd(locatedBlock.getBlock().getNumBytes()).setMaxBytesRcvd(maxBytesRcvd)
.setLatestGenerationStamp(latestGS).setRequestedChecksum(checksumProto)
.setCachingStrategy(CachingStrategyProto.newBuilder().setDropBehind(true).build());
List<Future<Channel>> futureList = new ArrayList<>(datanodeInfos.length);
for (int i = 0; i < datanodeInfos.length; i++) {
@ -464,26 +463,26 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
futureList.add(promise);
String dnAddr = dnInfo.getXferAddr(connectToDnViaHostname);
new Bootstrap().group(eventLoopGroup).channel(channelClass)
.option(CONNECT_TIMEOUT_MILLIS, timeoutMs).handler(new ChannelInitializer<Channel>() {
.option(CONNECT_TIMEOUT_MILLIS, timeoutMs).handler(new ChannelInitializer<Channel>() {
@Override
protected void initChannel(Channel ch) throws Exception {
// we need to get the remote address of the channel so we can only move on after
// channel connected. Leave an empty implementation here because netty does not allow
// a null handler.
}
}).connect(NetUtils.createSocketAddr(dnAddr)).addListener(new ChannelFutureListener() {
@Override
protected void initChannel(Channel ch) throws Exception {
// we need to get the remote address of the channel so we can only move on after
// channel connected. Leave an empty implementation here because netty does not allow
// a null handler.
}
}).connect(NetUtils.createSocketAddr(dnAddr)).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (future.isSuccess()) {
initialize(conf, future.channel(), dnInfo, storageType, writeBlockProtoBuilder,
timeoutMs, client, locatedBlock.getBlockToken(), promise);
} else {
promise.tryFailure(future.cause());
}
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (future.isSuccess()) {
initialize(conf, future.channel(), dnInfo, storageType, writeBlockProtoBuilder,
timeoutMs, client, locatedBlock.getBlockToken(), promise);
} else {
promise.tryFailure(future.cause());
}
});
}
});
}
return futureList;
}
@ -513,21 +512,21 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
}
private static FanOutOneBlockAsyncDFSOutput createOutput(DistributedFileSystem dfs, String src,
boolean overwrite, boolean createParent, short replication, long blockSize,
EventLoopGroup eventLoopGroup, Class<? extends Channel> channelClass,
StreamSlowMonitor monitor) throws IOException {
boolean overwrite, boolean createParent, short replication, long blockSize,
EventLoopGroup eventLoopGroup, Class<? extends Channel> channelClass, StreamSlowMonitor monitor)
throws IOException {
Configuration conf = dfs.getConf();
DFSClient client = dfs.getClient();
String clientName = client.getClientName();
ClientProtocol namenode = client.getNamenode();
int createMaxRetries = conf.getInt(ASYNC_DFS_OUTPUT_CREATE_MAX_RETRIES,
DEFAULT_ASYNC_DFS_OUTPUT_CREATE_MAX_RETRIES);
int createMaxRetries =
conf.getInt(ASYNC_DFS_OUTPUT_CREATE_MAX_RETRIES, DEFAULT_ASYNC_DFS_OUTPUT_CREATE_MAX_RETRIES);
ExcludeDatanodeManager excludeDatanodeManager = monitor.getExcludeDatanodeManager();
Set<DatanodeInfo> toExcludeNodes =
new HashSet<>(excludeDatanodeManager.getExcludeDNs().keySet());
for (int retry = 0;; retry++) {
LOG.debug("When create output stream for {}, exclude list is {}, retry={}", src,
toExcludeNodes, retry);
toExcludeNodes, retry);
HdfsFileStatus stat;
try {
stat = FILE_CREATOR.create(namenode, src,
@ -616,14 +615,14 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
* inside an {@link EventLoop}.
*/
public static FanOutOneBlockAsyncDFSOutput createOutput(DistributedFileSystem dfs, Path f,
boolean overwrite, boolean createParent, short replication, long blockSize,
EventLoopGroup eventLoopGroup, Class<? extends Channel> channelClass,
final StreamSlowMonitor monitor) throws IOException {
boolean overwrite, boolean createParent, short replication, long blockSize,
EventLoopGroup eventLoopGroup, Class<? extends Channel> channelClass,
final StreamSlowMonitor monitor) throws IOException {
return new FileSystemLinkResolver<FanOutOneBlockAsyncDFSOutput>() {
@Override
public FanOutOneBlockAsyncDFSOutput doCall(Path p)
throws IOException, UnresolvedLinkException {
throws IOException, UnresolvedLinkException {
return createOutput(dfs, p.toUri().getPath(), overwrite, createParent, replication,
blockSize, eventLoopGroup, channelClass, monitor);
}
@ -643,7 +642,7 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
}
static void completeFile(DFSClient client, ClientProtocol namenode, String src, String clientName,
ExtendedBlock block, long fileId) {
ExtendedBlock block, long fileId) {
for (int retry = 0;; retry++) {
try {
if (namenode.complete(src, clientName, block, fileId)) {
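The createFileCreator* methods above exist because ClientProtocol.create gained extra parameters across Hadoop releases, so the helper probes for the newest signature via reflection and falls back on NoSuchMethodException. A generic sketch of that probe-and-fallback pattern; the Target class below is illustrative, not a Hadoop type:

```java
import java.lang.reflect.Method;

// Illustrative target type that only offers the "newer" of two possible method signatures.
class Target {
  public String create(String src, boolean overwrite, String ecPolicy) {
    return "created " + src + " (overwrite=" + overwrite + ")";
  }
  // An older release would instead only offer: public String create(String src, boolean overwrite)
}

final class VersionedInvoker {
  private final Method createMethod;
  private final boolean newSignature;

  VersionedInvoker() throws NoSuchMethodException {
    Method m;
    boolean isNew;
    try {
      // Probe for the newest signature first.
      m = Target.class.getMethod("create", String.class, boolean.class, String.class);
      isNew = true;
    } catch (NoSuchMethodException e) {
      // Fall back to the older signature when the new one is absent.
      m = Target.class.getMethod("create", String.class, boolean.class);
      isNew = false;
    }
    this.createMethod = m;
    this.newSignature = isNew;
  }

  String create(Target t, String src, boolean overwrite) throws Exception {
    return newSignature
      ? (String) createMethod.invoke(t, src, overwrite, null) // pad the extra parameter with null
      : (String) createMethod.invoke(t, src, overwrite);
  }

  public static void main(String[] args) throws Exception {
    System.out.println(new VersionedInvoker().create(new Target(), "/tmp/f", true));
  }
}
```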


@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -104,7 +104,7 @@ import org.apache.hbase.thirdparty.io.netty.util.concurrent.Promise;
@InterfaceAudience.Private
public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
private static final Logger LOG =
LoggerFactory.getLogger(FanOutOneBlockAsyncDFSOutputSaslHelper.class);
LoggerFactory.getLogger(FanOutOneBlockAsyncDFSOutputSaslHelper.class);
private FanOutOneBlockAsyncDFSOutputSaslHelper() {
}
@ -129,21 +129,21 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
private interface TransparentCryptoHelper {
Encryptor createEncryptor(Configuration conf, FileEncryptionInfo feInfo, DFSClient client)
throws IOException;
throws IOException;
}
private static final TransparentCryptoHelper TRANSPARENT_CRYPTO_HELPER;
private static SaslAdaptor createSaslAdaptor()
throws NoSuchFieldException, NoSuchMethodException {
throws NoSuchFieldException, NoSuchMethodException {
Field saslPropsResolverField =
SaslDataTransferClient.class.getDeclaredField("saslPropsResolver");
SaslDataTransferClient.class.getDeclaredField("saslPropsResolver");
saslPropsResolverField.setAccessible(true);
Field trustedChannelResolverField =
SaslDataTransferClient.class.getDeclaredField("trustedChannelResolver");
SaslDataTransferClient.class.getDeclaredField("trustedChannelResolver");
trustedChannelResolverField.setAccessible(true);
Field fallbackToSimpleAuthField =
SaslDataTransferClient.class.getDeclaredField("fallbackToSimpleAuth");
SaslDataTransferClient.class.getDeclaredField("fallbackToSimpleAuth");
fallbackToSimpleAuthField.setAccessible(true);
return new SaslAdaptor() {
@ -177,7 +177,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
}
private static TransparentCryptoHelper createTransparentCryptoHelperWithoutHDFS12396()
throws NoSuchMethodException {
throws NoSuchMethodException {
Method decryptEncryptedDataEncryptionKeyMethod = DFSClient.class
.getDeclaredMethod("decryptEncryptedDataEncryptionKey", FileEncryptionInfo.class);
decryptEncryptedDataEncryptionKeyMethod.setAccessible(true);
@ -185,7 +185,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
@Override
public Encryptor createEncryptor(Configuration conf, FileEncryptionInfo feInfo,
DFSClient client) throws IOException {
DFSClient client) throws IOException {
try {
KeyVersion decryptedKey =
(KeyVersion) decryptEncryptedDataEncryptionKeyMethod.invoke(client, feInfo);
@ -206,7 +206,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
}
private static TransparentCryptoHelper createTransparentCryptoHelperWithHDFS12396()
throws ClassNotFoundException, NoSuchMethodException {
throws ClassNotFoundException, NoSuchMethodException {
Class<?> hdfsKMSUtilCls = Class.forName("org.apache.hadoop.hdfs.HdfsKMSUtil");
Method decryptEncryptedDataEncryptionKeyMethod = hdfsKMSUtilCls.getDeclaredMethod(
"decryptEncryptedDataEncryptionKey", FileEncryptionInfo.class, KeyProvider.class);
@ -215,7 +215,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
@Override
public Encryptor createEncryptor(Configuration conf, FileEncryptionInfo feInfo,
DFSClient client) throws IOException {
DFSClient client) throws IOException {
try {
KeyVersion decryptedKey = (KeyVersion) decryptEncryptedDataEncryptionKeyMethod
.invoke(null, feInfo, client.getKeyProvider());
@ -236,12 +236,12 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
}
private static TransparentCryptoHelper createTransparentCryptoHelper()
throws NoSuchMethodException, ClassNotFoundException {
throws NoSuchMethodException, ClassNotFoundException {
try {
return createTransparentCryptoHelperWithoutHDFS12396();
} catch (NoSuchMethodException e) {
LOG.debug("No decryptEncryptedDataEncryptionKey method in DFSClient," +
" should be hadoop version with HDFS-12396", e);
LOG.debug("No decryptEncryptedDataEncryptionKey method in DFSClient,"
+ " should be hadoop version with HDFS-12396", e);
}
return createTransparentCryptoHelperWithHDFS12396();
}
@ -252,8 +252,8 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
TRANSPARENT_CRYPTO_HELPER = createTransparentCryptoHelper();
} catch (Exception e) {
String msg = "Couldn't properly initialize access to HDFS internals. Please "
+ "update your WAL Provider to not make use of the 'asyncfs' provider. See "
+ "HBASE-16110 for more information.";
+ "update your WAL Provider to not make use of the 'asyncfs' provider. See "
+ "HBASE-16110 for more information.";
LOG.error(msg, e);
throw new Error(msg, e);
}
@ -324,8 +324,8 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
private int step = 0;
public SaslNegotiateHandler(Configuration conf, String username, char[] password,
Map<String, String> saslProps, int timeoutMs, Promise<Void> promise,
DFSClient dfsClient) throws SaslException {
Map<String, String> saslProps, int timeoutMs, Promise<Void> promise, DFSClient dfsClient)
throws SaslException {
this.conf = conf;
this.saslProps = saslProps;
this.saslClient = Sasl.createSaslClient(new String[] { MECHANISM }, username, PROTOCOL,
@ -355,8 +355,8 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
}
/**
* The asyncfs subsystem emulates a HDFS client by sending protobuf messages via netty.
* After Hadoop 3.3.0, the protobuf classes are relocated to org.apache.hadoop.thirdparty.protobuf.*.
* The asyncfs subsystem emulates a HDFS client by sending protobuf messages via netty. After
* Hadoop 3.3.0, the protobuf classes are relocated to org.apache.hadoop.thirdparty.protobuf.*.
* Use Reflection to check which ones to use.
*/
private static class BuilderPayloadSetter {
@ -366,13 +366,11 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
/**
* Create a ByteString from byte array without copying (wrap), and then set it as the payload
* for the builder.
*
* @param builder builder for HDFS DataTransferEncryptorMessage.
* @param payload byte array of payload.
* @throws IOException
* @param payload byte array of payload.
*/
static void wrapAndSetPayload(DataTransferEncryptorMessageProto.Builder builder, byte[] payload)
throws IOException {
static void wrapAndSetPayload(DataTransferEncryptorMessageProto.Builder builder,
byte[] payload) throws IOException {
Object byteStringObject;
try {
// byteStringObject = new LiteralByteString(payload);
@ -396,18 +394,18 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
try {
// See if it can load the relocated ByteString, which comes from hadoop-thirdparty.
byteStringClass = Class.forName("org.apache.hadoop.thirdparty.protobuf.ByteString");
LOG.debug("Found relocated ByteString class from hadoop-thirdparty." +
" Assuming this is Hadoop 3.3.0+.");
LOG.debug("Found relocated ByteString class from hadoop-thirdparty."
+ " Assuming this is Hadoop 3.3.0+.");
} catch (ClassNotFoundException e) {
LOG.debug("Did not find relocated ByteString class from hadoop-thirdparty." +
" Assuming this is below Hadoop 3.3.0", e);
LOG.debug("Did not find relocated ByteString class from hadoop-thirdparty."
+ " Assuming this is below Hadoop 3.3.0", e);
}
// LiteralByteString is a package private class in protobuf. Make it accessible.
Class<?> literalByteStringClass;
try {
literalByteStringClass = Class.forName(
"org.apache.hadoop.thirdparty.protobuf.ByteString$LiteralByteString");
literalByteStringClass =
Class.forName("org.apache.hadoop.thirdparty.protobuf.ByteString$LiteralByteString");
LOG.debug("Shaded LiteralByteString from hadoop-thirdparty is found.");
} catch (ClassNotFoundException e) {
try {
@ -435,9 +433,9 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
}
private void sendSaslMessage(ChannelHandlerContext ctx, byte[] payload,
List<CipherOption> options) throws IOException {
List<CipherOption> options) throws IOException {
DataTransferEncryptorMessageProto.Builder builder =
DataTransferEncryptorMessageProto.newBuilder();
DataTransferEncryptorMessageProto.newBuilder();
builder.setStatus(DataTransferEncryptorStatus.SUCCESS);
if (payload != null) {
BuilderPayloadSetter.wrapAndSetPayload(builder, payload);
@ -486,7 +484,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
private boolean requestedQopContainsPrivacy() {
Set<String> requestedQop =
ImmutableSet.copyOf(Arrays.asList(saslProps.get(Sasl.QOP).split(",")));
ImmutableSet.copyOf(Arrays.asList(saslProps.get(Sasl.QOP).split(",")));
return requestedQop.contains("auth-conf");
}
@ -495,15 +493,14 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
throw new IOException("Failed to complete SASL handshake");
}
Set<String> requestedQop =
ImmutableSet.copyOf(Arrays.asList(saslProps.get(Sasl.QOP).split(",")));
ImmutableSet.copyOf(Arrays.asList(saslProps.get(Sasl.QOP).split(",")));
String negotiatedQop = getNegotiatedQop();
LOG.debug(
"Verifying QOP, requested QOP = " + requestedQop + ", negotiated QOP = " + negotiatedQop);
if (!requestedQop.contains(negotiatedQop)) {
throw new IOException(String.format("SASL handshake completed, but "
+ "channel does not have acceptable quality of protection, "
+ "requested = %s, negotiated = %s",
requestedQop, negotiatedQop));
+ "channel does not have acceptable quality of protection, "
+ "requested = %s, negotiated = %s", requestedQop, negotiatedQop));
}
}
@ -522,13 +519,13 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
outKey = saslClient.unwrap(outKey, 0, outKey.length);
}
return new CipherOption(option.getCipherSuite(), inKey, option.getInIv(), outKey,
option.getOutIv());
option.getOutIv());
}
private CipherOption getCipherOption(DataTransferEncryptorMessageProto proto,
boolean isNegotiatedQopPrivacy, SaslClient saslClient) throws IOException {
boolean isNegotiatedQopPrivacy, SaslClient saslClient) throws IOException {
List<CipherOption> cipherOptions =
PBHelperClient.convertCipherOptionProtos(proto.getCipherOptionList());
PBHelperClient.convertCipherOptionProtos(proto.getCipherOptionList());
if (cipherOptions == null || cipherOptions.isEmpty()) {
return null;
}
@ -558,7 +555,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
assert response == null;
checkSaslComplete();
CipherOption cipherOption =
getCipherOption(proto, isNegotiatedQopPrivacy(), saslClient);
getCipherOption(proto, isNegotiatedQopPrivacy(), saslClient);
ChannelPipeline p = ctx.pipeline();
while (p.first() != null) {
p.removeFirst();
@ -639,7 +636,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise)
throws Exception {
throws Exception {
if (msg instanceof ByteBuf) {
ByteBuf buf = (ByteBuf) msg;
cBuf.addComponent(buf);
@ -676,7 +673,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
private final Decryptor decryptor;
public DecryptHandler(CryptoCodec codec, byte[] key, byte[] iv)
throws GeneralSecurityException, IOException {
throws GeneralSecurityException, IOException {
this.decryptor = codec.createDecryptor();
this.decryptor.init(key, Arrays.copyOf(iv, iv.length));
}
@ -709,14 +706,14 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
private final Encryptor encryptor;
public EncryptHandler(CryptoCodec codec, byte[] key, byte[] iv)
throws GeneralSecurityException, IOException {
throws GeneralSecurityException, IOException {
this.encryptor = codec.createEncryptor();
this.encryptor.init(key, Arrays.copyOf(iv, iv.length));
}
@Override
protected ByteBuf allocateBuffer(ChannelHandlerContext ctx, ByteBuf msg, boolean preferDirect)
throws Exception {
throws Exception {
if (preferDirect) {
return ctx.alloc().directBuffer(msg.readableBytes());
} else {
@ -747,7 +744,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
private static String getUserNameFromEncryptionKey(DataEncryptionKey encryptionKey) {
return encryptionKey.keyId + NAME_DELIMITER + encryptionKey.blockPoolId + NAME_DELIMITER
+ Base64.getEncoder().encodeToString(encryptionKey.nonce);
+ Base64.getEncoder().encodeToString(encryptionKey.nonce);
}
private static char[] encryptionKeyToPassword(byte[] encryptionKey) {
@ -771,26 +768,26 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
}
private static void doSaslNegotiation(Configuration conf, Channel channel, int timeoutMs,
String username, char[] password, Map<String, String> saslProps, Promise<Void> saslPromise,
DFSClient dfsClient) {
String username, char[] password, Map<String, String> saslProps, Promise<Void> saslPromise,
DFSClient dfsClient) {
try {
channel.pipeline().addLast(new IdleStateHandler(timeoutMs, 0, 0, TimeUnit.MILLISECONDS),
new ProtobufVarint32FrameDecoder(),
new ProtobufDecoder(DataTransferEncryptorMessageProto.getDefaultInstance()),
new SaslNegotiateHandler(conf, username, password, saslProps, timeoutMs, saslPromise,
dfsClient));
dfsClient));
} catch (SaslException e) {
saslPromise.tryFailure(e);
}
}
static void trySaslNegotiate(Configuration conf, Channel channel, DatanodeInfo dnInfo,
int timeoutMs, DFSClient client, Token<BlockTokenIdentifier> accessToken,
Promise<Void> saslPromise) throws IOException {
int timeoutMs, DFSClient client, Token<BlockTokenIdentifier> accessToken,
Promise<Void> saslPromise) throws IOException {
SaslDataTransferClient saslClient = client.getSaslDataTransferClient();
SaslPropertiesResolver saslPropsResolver = SASL_ADAPTOR.getSaslPropsResolver(saslClient);
TrustedChannelResolver trustedChannelResolver =
SASL_ADAPTOR.getTrustedChannelResolver(saslClient);
SASL_ADAPTOR.getTrustedChannelResolver(saslClient);
AtomicBoolean fallbackToSimpleAuth = SASL_ADAPTOR.getFallbackToSimpleAuth(saslClient);
InetAddress addr = ((InetSocketAddress) channel.remoteAddress()).getAddress();
if (trustedChannelResolver.isTrusted() || trustedChannelResolver.isTrusted(addr)) {
@ -805,24 +802,23 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
}
doSaslNegotiation(conf, channel, timeoutMs, getUserNameFromEncryptionKey(encryptionKey),
encryptionKeyToPassword(encryptionKey.encryptionKey),
createSaslPropertiesForEncryption(encryptionKey.encryptionAlgorithm), saslPromise,
client);
createSaslPropertiesForEncryption(encryptionKey.encryptionAlgorithm), saslPromise, client);
} else if (!UserGroupInformation.isSecurityEnabled()) {
if (LOG.isDebugEnabled()) {
LOG.debug("SASL client skipping handshake in unsecured configuration for addr = " + addr
+ ", datanodeId = " + dnInfo);
+ ", datanodeId = " + dnInfo);
}
saslPromise.trySuccess(null);
} else if (dnInfo.getXferPort() < 1024) {
if (LOG.isDebugEnabled()) {
LOG.debug("SASL client skipping handshake in secured configuration with "
+ "privileged port for addr = " + addr + ", datanodeId = " + dnInfo);
+ "privileged port for addr = " + addr + ", datanodeId = " + dnInfo);
}
saslPromise.trySuccess(null);
} else if (fallbackToSimpleAuth != null && fallbackToSimpleAuth.get()) {
if (LOG.isDebugEnabled()) {
LOG.debug("SASL client skipping handshake in secured configuration with "
+ "unsecured cluster for addr = " + addr + ", datanodeId = " + dnInfo);
+ "unsecured cluster for addr = " + addr + ", datanodeId = " + dnInfo);
}
saslPromise.trySuccess(null);
} else if (saslPropsResolver != null) {
@ -832,21 +828,21 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
}
doSaslNegotiation(conf, channel, timeoutMs, buildUsername(accessToken),
buildClientPassword(accessToken), saslPropsResolver.getClientProperties(addr), saslPromise,
client);
client);
} else {
// It's a secured cluster using non-privileged ports, but no SASL. The only way this can
// happen is if the DataNode has ignore.secure.ports.for.testing configured, so this is a rare
// edge case.
if (LOG.isDebugEnabled()) {
LOG.debug("SASL client skipping handshake in secured configuration with no SASL "
+ "protection configured for addr = " + addr + ", datanodeId = " + dnInfo);
+ "protection configured for addr = " + addr + ", datanodeId = " + dnInfo);
}
saslPromise.trySuccess(null);
}
}
static Encryptor createEncryptor(Configuration conf, HdfsFileStatus stat, DFSClient client)
throws IOException {
throws IOException {
FileEncryptionInfo feInfo = stat.getFileEncryptionInfo();
if (feInfo == null) {
return null;


@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -17,33 +17,29 @@
*/
package org.apache.hadoop.hbase.io.asyncfs;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.List;
import org.apache.yetus.audience.InterfaceAudience;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf;
import org.apache.hbase.thirdparty.io.netty.buffer.ByteBufUtil;
import org.apache.hbase.thirdparty.io.netty.channel.ChannelHandlerContext;
import org.apache.hbase.thirdparty.io.netty.handler.codec.MessageToMessageDecoder;
import org.apache.hbase.thirdparty.io.netty.util.internal.ObjectUtil;
import org.apache.yetus.audience.InterfaceAudience;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.List;
/**
* Modified based on io.netty.handler.codec.protobuf.ProtobufDecoder.
* The Netty's ProtobufDecode supports unshaded protobuf messages (com.google.protobuf).
*
* Hadoop 3.3.0 and above relocates protobuf classes to a shaded jar (hadoop-thirdparty), and
* so we must use reflection to detect which one (relocated or not) to use.
*
* Do not use this to process HBase's shaded protobuf messages. This is meant to process the
* protobuf messages in HDFS for the asyncfs use case.
* */
* Modified based on io.netty.handler.codec.protobuf.ProtobufDecoder. Netty's ProtobufDecoder
* supports unshaded protobuf messages (com.google.protobuf). Hadoop 3.3.0 and above relocates
* protobuf classes to a shaded jar (hadoop-thirdparty), so we must use reflection to detect
* which one (relocated or not) to use. Do not use this to process HBase's shaded protobuf messages.
* This is meant to process the protobuf messages in HDFS for the asyncfs use case.
*/
@InterfaceAudience.Private
public class ProtobufDecoder extends MessageToMessageDecoder<ByteBuf> {
private static final Logger LOG =
LoggerFactory.getLogger(ProtobufDecoder.class);
private static final Logger LOG = LoggerFactory.getLogger(ProtobufDecoder.class);
private static Class<?> protobufMessageLiteClass = null;
private static Class<?> protobufMessageLiteBuilderClass = null;
@ -60,23 +56,22 @@ public class ProtobufDecoder extends MessageToMessageDecoder<ByteBuf> {
private Object parser;
private Object builder;
public ProtobufDecoder(Object prototype) {
try {
Method getDefaultInstanceForTypeMethod = protobufMessageLiteClass.getMethod(
"getDefaultInstanceForType");
Object prototype1 = getDefaultInstanceForTypeMethod
.invoke(ObjectUtil.checkNotNull(prototype, "prototype"));
Method getDefaultInstanceForTypeMethod =
protobufMessageLiteClass.getMethod("getDefaultInstanceForType");
Object prototype1 =
getDefaultInstanceForTypeMethod.invoke(ObjectUtil.checkNotNull(prototype, "prototype"));
// parser = prototype.getParserForType()
parser = getParserForTypeMethod.invoke(prototype1);
parseFromMethod = parser.getClass().getMethod(
"parseFrom", byte[].class, int.class, int.class);
parseFromMethod =
parser.getClass().getMethod("parseFrom", byte[].class, int.class, int.class);
// builder = prototype.newBuilderForType();
builder = newBuilderForTypeMethod.invoke(prototype1);
mergeFromMethod = builder.getClass().getMethod(
"mergeFrom", byte[].class, int.class, int.class);
mergeFromMethod =
builder.getClass().getMethod("mergeFrom", byte[].class, int.class, int.class);
// All protobuf message builders inherits from MessageLite.Builder
buildMethod = protobufMessageLiteBuilderClass.getDeclaredMethod("build");
@ -88,8 +83,7 @@ public class ProtobufDecoder extends MessageToMessageDecoder<ByteBuf> {
}
}
protected void decode(
ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws Exception {
protected void decode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws Exception {
int length = msg.readableBytes();
byte[] array;
int offset;
@ -122,8 +116,8 @@ public class ProtobufDecoder extends MessageToMessageDecoder<ByteBuf> {
try {
protobufMessageLiteClass = Class.forName("org.apache.hadoop.thirdparty.protobuf.MessageLite");
protobufMessageLiteBuilderClass = Class.forName(
"org.apache.hadoop.thirdparty.protobuf.MessageLite$Builder");
protobufMessageLiteBuilderClass =
Class.forName("org.apache.hadoop.thirdparty.protobuf.MessageLite$Builder");
LOG.debug("Hadoop 3.3 and above shades protobuf.");
} catch (ClassNotFoundException e) {
LOG.debug("Hadoop 3.2 and below use unshaded protobuf.", e);


@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information


@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -22,7 +22,6 @@ import java.nio.ByteBuffer;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.io.ByteArrayOutputStream;
@ -50,7 +49,7 @@ public class WrapperAsyncFSOutput implements AsyncFSOutput {
public WrapperAsyncFSOutput(Path file, FSDataOutputStream out) {
this.out = out;
this.executor = Executors.newSingleThreadExecutor(new ThreadFactoryBuilder().setDaemon(true)
.setNameFormat("AsyncFSOutputFlusher-" + file.toString().replace("%", "%%")).build());
.setNameFormat("AsyncFSOutputFlusher-" + file.toString().replace("%", "%%")).build());
}
@Override
@ -95,8 +94,8 @@ public class WrapperAsyncFSOutput implements AsyncFSOutput {
}
long pos = out.getPos();
/**
* This flush0 method could only be called by single thread, so here we could
* safely overwrite without any synchronization.
* This flush0 method could only be called by single thread, so here we could safely overwrite
* without any synchronization.
*/
this.syncedLength = pos;
future.complete(pos);
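WrapperAsyncFSOutput turns a blocking FSDataOutputStream into an AsyncFSOutput by funnelling every flush through a dedicated single-thread executor, which is also why syncedLength can be overwritten without synchronization. A simplified, dependency-free sketch of that pattern, with a plain OutputStream standing in for the Hadoop stream:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical wrapper: one flusher thread serializes all flushes of a blocking stream,
// so the synced length can be updated without locks.
final class SingleThreadFlushingOutput implements AutoCloseable {
  private final OutputStream out;
  private final ExecutorService flusher = Executors.newSingleThreadExecutor();
  private long written;               // updated by the writer
  private volatile long syncedLength; // only ever written from the single flusher thread

  SingleThreadFlushingOutput(OutputStream out) { this.out = out; }

  void write(byte[] data) throws IOException {
    out.write(data);
    written += data.length;
  }

  CompletableFuture<Long> flush() {
    long target = written;
    CompletableFuture<Long> future = new CompletableFuture<>();
    flusher.execute(() -> {
      try {
        out.flush();
        syncedLength = target; // safe: this is the only thread that writes syncedLength
        future.complete(target);
      } catch (IOException e) {
        future.completeExceptionally(e);
      }
    });
    return future;
  }

  long getSyncedLength() { return syncedLength; }

  @Override public void close() throws IOException {
    flusher.shutdown();
    out.close();
  }

  public static void main(String[] args) throws Exception {
    try (SingleThreadFlushingOutput o = new SingleThreadFlushingOutput(new ByteArrayOutputStream())) {
      o.write("hello".getBytes());
      System.out.println("synced up to " + o.flush().join() + " bytes");
    }
  }
}
```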


@ -56,24 +56,23 @@ public class ExcludeDatanodeManager implements ConfigurationObserver {
private final int maxExcludeDNCount;
private final Configuration conf;
// This is a map of providerId->StreamSlowMonitor
private final Map<String, StreamSlowMonitor> streamSlowMonitors =
new ConcurrentHashMap<>(1);
private final Map<String, StreamSlowMonitor> streamSlowMonitors = new ConcurrentHashMap<>(1);
public ExcludeDatanodeManager(Configuration conf) {
this.conf = conf;
this.maxExcludeDNCount = conf.getInt(WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT_KEY,
DEFAULT_WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT);
this.excludeDNsCache = CacheBuilder.newBuilder()
.expireAfterWrite(this.conf.getLong(WAL_EXCLUDE_DATANODE_TTL_KEY,
DEFAULT_WAL_EXCLUDE_DATANODE_TTL), TimeUnit.HOURS)
.maximumSize(this.maxExcludeDNCount)
.build();
.expireAfterWrite(
this.conf.getLong(WAL_EXCLUDE_DATANODE_TTL_KEY, DEFAULT_WAL_EXCLUDE_DATANODE_TTL),
TimeUnit.HOURS)
.maximumSize(this.maxExcludeDNCount).build();
}
/**
* Try to add a datanode to the regionserver excluding cache
* @param datanodeInfo the datanode to be added to the excluded cache
* @param cause the cause that the datanode is hope to be excluded
* @param cause        the reason why the datanode should be excluded
* @return True if the datanode is added to the regionserver excluding cache, false otherwise
*/
public boolean tryAddExcludeDN(DatanodeInfo datanodeInfo, String cause) {
@ -85,15 +84,15 @@ public class ExcludeDatanodeManager implements ConfigurationObserver {
datanodeInfo, cause, excludeDNsCache.size());
return true;
}
LOG.debug("Try add datanode {} to exclude cache by [{}] failed, "
+ "current exclude DNs are {}", datanodeInfo, cause, getExcludeDNs().keySet());
LOG.debug(
"Try add datanode {} to exclude cache by [{}] failed, " + "current exclude DNs are {}",
datanodeInfo, cause, getExcludeDNs().keySet());
return false;
}
public StreamSlowMonitor getStreamSlowMonitor(String name) {
String key = name == null || name.isEmpty() ? "defaultMonitorName" : name;
return streamSlowMonitors
.computeIfAbsent(key, k -> new StreamSlowMonitor(conf, key, this));
return streamSlowMonitors.computeIfAbsent(key, k -> new StreamSlowMonitor(conf, key, this));
}
public Map<DatanodeInfo, Long> getExcludeDNs() {
@ -105,10 +104,12 @@ public class ExcludeDatanodeManager implements ConfigurationObserver {
for (StreamSlowMonitor monitor : streamSlowMonitors.values()) {
monitor.onConfigurationChange(conf);
}
this.excludeDNsCache = CacheBuilder.newBuilder().expireAfterWrite(
this.conf.getLong(WAL_EXCLUDE_DATANODE_TTL_KEY, DEFAULT_WAL_EXCLUDE_DATANODE_TTL),
TimeUnit.HOURS).maximumSize(this.conf
.getInt(WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT_KEY, DEFAULT_WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT))
this.excludeDNsCache = CacheBuilder.newBuilder()
.expireAfterWrite(
this.conf.getLong(WAL_EXCLUDE_DATANODE_TTL_KEY, DEFAULT_WAL_EXCLUDE_DATANODE_TTL),
TimeUnit.HOURS)
.maximumSize(this.conf.getInt(WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT_KEY,
DEFAULT_WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT))
.build();
}
}
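The exclude list above is a cache bounded both by a TTL (expireAfterWrite) and a maximum size, so excluded datanodes age out on their own. A small sketch of the same idea, assuming plain com.google.common.cache behaves like the relocated hbase-thirdparty copy used here, and with hostname strings standing in for DatanodeInfo:

```java
import java.util.Map;
import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

// Sketch of a TTL + size bounded exclude cache, mirroring the CacheBuilder usage above.
final class ExcludeCacheSketch {
  private final Cache<String, Long> excluded;
  private final int maxExcluded;

  ExcludeCacheSketch(long ttlHours, int maxExcluded) {
    this.maxExcluded = maxExcluded;
    this.excluded = CacheBuilder.newBuilder()
        .expireAfterWrite(ttlHours, TimeUnit.HOURS) // excluded nodes age out after the TTL
        .maximumSize(maxExcluded)                   // hard cap enforced by the cache itself
        .build();
  }

  /** Returns true if the datanode was newly added to the exclude cache. */
  boolean tryExclude(String datanode) {
    if (excluded.size() >= maxExcluded || excluded.getIfPresent(datanode) != null) {
      return false;
    }
    excluded.put(datanode, System.currentTimeMillis());
    return true;
  }

  /** Same idea as getExcludeDNs() above: a live view of the currently excluded nodes. */
  Map<String, Long> snapshot() {
    return excluded.asMap();
  }
}
```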


@ -38,18 +38,16 @@ import org.apache.hbase.thirdparty.com.google.common.cache.CacheLoader;
import org.apache.hbase.thirdparty.com.google.common.cache.LoadingCache;
/**
* Class for monitor the wal file flush performance.
* Each active wal file has a StreamSlowMonitor.
* Class for monitoring the WAL file flush performance. Each active WAL file has a StreamSlowMonitor.
*/
@InterfaceAudience.Private
public class StreamSlowMonitor implements ConfigurationObserver {
private static final Logger LOG = LoggerFactory.getLogger(StreamSlowMonitor.class);
/**
* Configure for the min count for a datanode detected slow.
* If a datanode is detected slow times up to this count, then it will be added to the exclude
* datanode cache by {@link ExcludeDatanodeManager#tryAddExcludeDN(DatanodeInfo, String)}
* of this regionsever.
* Configure the minimum number of slow detections for a datanode. Once a datanode is detected
* slow this many times, it will be added to the exclude datanode cache by
* {@link ExcludeDatanodeManager#tryAddExcludeDN(DatanodeInfo, String)} of this regionserver.
*/
private static final String WAL_SLOW_DETECT_MIN_COUNT_KEY =
"hbase.regionserver.async.wal.min.slow.detect.count";
@ -63,9 +61,9 @@ public class StreamSlowMonitor implements ConfigurationObserver {
private static final long DEFAULT_WAL_SLOW_DETECT_DATA_TTL = 10 * 60 * 1000; // 10min in ms
/**
* Configure for the speed check of packet min length.
* For packets whose data length smaller than this value, check slow by processing time.
* While for packets whose data length larger than this value, check slow by flushing speed.
* Configure the minimum packet data length for the speed check. Packets whose data length is
* smaller than this value are checked for slowness by processing time, while packets whose data
* length is larger are checked by flushing speed.
*/
private static final String DATANODE_PACKET_FLUSH_CHECK_SPEED_MIN_DATA_LENGTH_KEY =
"hbase.regionserver.async.wal.datanode.slow.check.speed.packet.data.length.min";
@ -73,8 +71,8 @@ public class StreamSlowMonitor implements ConfigurationObserver {
private static final long DEFAULT_DATANODE_PACKET_FLUSH_CHECK_SPEED_MIN_DATA_LENGTH = 64 * 1024;
/**
* Configure for the slow packet process time, a duration from send to ACK.
* The processing time check is for packets that data length smaller than
* Configure the slow packet process time, a duration from send to ACK. The processing time
* check applies to packets whose data length is smaller than
* {@link StreamSlowMonitor#DATANODE_PACKET_FLUSH_CHECK_SPEED_MIN_DATA_LENGTH_KEY}
*/
public static final String DATANODE_SLOW_PACKET_PROCESS_TIME_KEY =
@ -105,15 +103,16 @@ public class StreamSlowMonitor implements ConfigurationObserver {
private long minLengthForSpeedCheck;
public StreamSlowMonitor(Configuration conf, String name,
ExcludeDatanodeManager excludeDatanodeManager) {
ExcludeDatanodeManager excludeDatanodeManager) {
setConf(conf);
this.name = name;
this.excludeDatanodeManager = excludeDatanodeManager;
this.datanodeSlowDataQueue = CacheBuilder.newBuilder()
.maximumSize(conf.getInt(WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT_KEY,
DEFAULT_WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT))
.expireAfterWrite(conf.getLong(WAL_EXCLUDE_DATANODE_TTL_KEY,
DEFAULT_WAL_EXCLUDE_DATANODE_TTL), TimeUnit.HOURS)
.expireAfterWrite(
conf.getLong(WAL_EXCLUDE_DATANODE_TTL_KEY, DEFAULT_WAL_EXCLUDE_DATANODE_TTL),
TimeUnit.HOURS)
.build(new CacheLoader<DatanodeInfo, Deque<PacketAckData>>() {
@Override
public Deque<PacketAckData> load(DatanodeInfo key) throws Exception {
@ -129,30 +128,33 @@ public class StreamSlowMonitor implements ConfigurationObserver {
/**
* Check if the packet process time shows that the relevant datanode is a slow node.
* @param datanodeInfo the datanode that processed the packet
* @param packetDataLen the data length of the packet (in bytes)
* @param processTimeMs the process time (in ms) of the packet on the datanode,
* @param datanodeInfo the datanode that processed the packet
* @param packetDataLen the data length of the packet (in bytes)
* @param processTimeMs the process time (in ms) of the packet on the datanode,
* @param lastAckTimestamp the last acked timestamp of the packet on another datanode
* @param unfinished if the packet is unfinished flushed to the datanode replicas
* @param unfinished       the number of datanode replicas that have not yet acked the packet
*/
public void checkProcessTimeAndSpeed(DatanodeInfo datanodeInfo, long packetDataLen,
long processTimeMs, long lastAckTimestamp, int unfinished) {
long processTimeMs, long lastAckTimestamp, int unfinished) {
long current = EnvironmentEdgeManager.currentTime();
// Here are two conditions used to determine whether a datanode is slow,
// 1. For small packet, we just have a simple time limit, without considering
// the size of the packet.
// 2. For large packet, we will calculate the speed, and check if the speed is too slow.
boolean slow = (packetDataLen <= minLengthForSpeedCheck && processTimeMs > slowPacketAckMs) || (
packetDataLen > minLengthForSpeedCheck
boolean slow = (packetDataLen <= minLengthForSpeedCheck && processTimeMs > slowPacketAckMs)
|| (packetDataLen > minLengthForSpeedCheck
&& (double) packetDataLen / processTimeMs < minPacketFlushSpeedKBs);
if (slow) {
// Check if large diff ack timestamp between replicas,
// should try to avoid misjudgments that caused by GC STW.
if ((lastAckTimestamp > 0 && current - lastAckTimestamp > slowPacketAckMs / 2) || (
lastAckTimestamp <= 0 && unfinished == 0)) {
LOG.info("Slow datanode: {}, data length={}, duration={}ms, unfinishedReplicas={}, "
+ "lastAckTimestamp={}, monitor name: {}", datanodeInfo, packetDataLen, processTimeMs,
unfinished, lastAckTimestamp, this.name);
if (
(lastAckTimestamp > 0 && current - lastAckTimestamp > slowPacketAckMs / 2)
|| (lastAckTimestamp <= 0 && unfinished == 0)
) {
LOG.info(
"Slow datanode: {}, data length={}, duration={}ms, unfinishedReplicas={}, "
+ "lastAckTimestamp={}, monitor name: {}",
datanodeInfo, packetDataLen, processTimeMs, unfinished, lastAckTimestamp, this.name);
if (addSlowAckData(datanodeInfo, packetDataLen, processTimeMs)) {
excludeDatanodeManager.tryAddExcludeDN(datanodeInfo, "slow packet ack");
}
@ -168,8 +170,10 @@ public class StreamSlowMonitor implements ConfigurationObserver {
private boolean addSlowAckData(DatanodeInfo datanodeInfo, long dataLength, long processTime) {
Deque<PacketAckData> slowDNQueue = datanodeSlowDataQueue.getUnchecked(datanodeInfo);
long current = EnvironmentEdgeManager.currentTime();
while (!slowDNQueue.isEmpty() && (current - slowDNQueue.getFirst().getTimestamp() > slowDataTtl
|| slowDNQueue.size() >= minSlowDetectCount)) {
while (
!slowDNQueue.isEmpty() && (current - slowDNQueue.getFirst().getTimestamp() > slowDataTtl
|| slowDNQueue.size() >= minSlowDetectCount)
) {
slowDNQueue.removeFirst();
}
slowDNQueue.addLast(new PacketAckData(dataLength, processTime));
@ -177,13 +181,13 @@ public class StreamSlowMonitor implements ConfigurationObserver {
}
private void setConf(Configuration conf) {
this.minSlowDetectCount = conf.getInt(WAL_SLOW_DETECT_MIN_COUNT_KEY,
DEFAULT_WAL_SLOW_DETECT_MIN_COUNT);
this.minSlowDetectCount =
conf.getInt(WAL_SLOW_DETECT_MIN_COUNT_KEY, DEFAULT_WAL_SLOW_DETECT_MIN_COUNT);
this.slowDataTtl = conf.getLong(WAL_SLOW_DETECT_DATA_TTL_KEY, DEFAULT_WAL_SLOW_DETECT_DATA_TTL);
this.slowPacketAckMs = conf.getLong(DATANODE_SLOW_PACKET_PROCESS_TIME_KEY,
DEFAULT_DATANODE_SLOW_PACKET_PROCESS_TIME);
this.minLengthForSpeedCheck = conf.getLong(
DATANODE_PACKET_FLUSH_CHECK_SPEED_MIN_DATA_LENGTH_KEY,
DEFAULT_DATANODE_SLOW_PACKET_PROCESS_TIME);
this.minLengthForSpeedCheck =
conf.getLong(DATANODE_PACKET_FLUSH_CHECK_SPEED_MIN_DATA_LENGTH_KEY,
DEFAULT_DATANODE_PACKET_FLUSH_CHECK_SPEED_MIN_DATA_LENGTH);
this.minPacketFlushSpeedKBs = conf.getDouble(DATANODE_SLOW_PACKET_FLUSH_MIN_SPEED_KEY,
DEFAULT_DATANODE_SLOW_PACKET_FLUSH_MIN_SPEED);
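The slowness test in checkProcessTimeAndSpeed above boils down to a two-branch predicate: small packets are judged purely on ack latency, large packets on effective flush speed. A self-contained restatement of just that predicate; the thresholds in main() are illustrative, not the HBase defaults:

```java
// Sketch of the slow-packet predicate used by the monitor above.
final class SlowPacketCheck {
  // Illustrative thresholds; the real values come from the configuration keys shown above.
  private final long minLengthForSpeedCheck; // bytes; at or below this, only latency is checked
  private final long slowPacketAckMs;        // latency threshold for small packets
  private final double minFlushSpeedKBs;     // KB/s threshold for large packets

  SlowPacketCheck(long minLengthForSpeedCheck, long slowPacketAckMs, double minFlushSpeedKBs) {
    this.minLengthForSpeedCheck = minLengthForSpeedCheck;
    this.slowPacketAckMs = slowPacketAckMs;
    this.minFlushSpeedKBs = minFlushSpeedKBs;
  }

  boolean isSlow(long packetDataLen, long processTimeMs) {
    if (packetDataLen <= minLengthForSpeedCheck) {
      // 1. Small packet: a simple time limit, ignoring the packet size.
      return processTimeMs > slowPacketAckMs;
    }
    // 2. Large packet: bytes per millisecond is roughly KB per second, so compare the ratio.
    return (double) packetDataLen / processTimeMs < minFlushSpeedKBs;
  }

  public static void main(String[] args) {
    SlowPacketCheck check = new SlowPacketCheck(64 * 1024, 6000, 20);
    // A 100 KB packet acked after 5.1 s flushes at roughly 19.6 KB/s, below the threshold.
    System.out.println(check.isSlow(100_000, 5_100));
  }
}
```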


@ -1,5 +1,4 @@
/*
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -21,8 +20,8 @@ package org.apache.hadoop.hbase.util;
import org.apache.yetus.audience.InterfaceAudience;
/**
* Similar interface as {@link org.apache.hadoop.util.Progressable} but returns
* a boolean to support canceling the operation.
* Similar interface as {@link org.apache.hadoop.util.Progressable} but returns a boolean to support
* canceling the operation.
* <p/>
* Used for doing updating of OPENING znode during log replay on region open.
*/
@ -30,8 +29,8 @@ import org.apache.yetus.audience.InterfaceAudience;
public interface CancelableProgressable {
/**
* Report progress. Returns true if operations should continue, false if the
* operation should be canceled and rolled back.
* Report progress. Returns true if operations should continue, false if the operation should be
* canceled and rolled back.
* @return whether to continue (true) or cancel (false) the operation
*/
boolean progress();
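Since CancelableProgressable is a single-method callback, a short sketch of how a long-running task might honour it; the interface is redeclared locally so the example stands alone:

```java
// Local copy of the single-method interface, so the sketch compiles on its own.
interface Cancelable {
  /** @return true to keep going, false to cancel and roll back */
  boolean progress();
}

final class ProgressLoop {
  /** Runs steps one at a time, reporting after each; stops early if the reporter says cancel. */
  static boolean runSteps(int steps, Cancelable reporter) {
    for (int i = 0; i < steps; i++) {
      // ... do one unit of work here ...
      if (!reporter.progress()) {
        return false; // caller asked us to cancel; undo or roll back as appropriate
      }
    }
    return true;
  }

  public static void main(String[] args) {
    boolean finished = runSteps(5, () -> true); // a reporter that never cancels
    System.out.println("finished = " + finished);
  }
}
```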


@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -120,8 +120,10 @@ public final class RecoverLeaseFSUtils {
// Cycle here until (subsequentPause * nbAttempt) elapses. While spinning, check
// isFileClosed if available (should be in hadoop 2.0.5... not in hadoop 1 though.
long localStartWaiting = EnvironmentEdgeManager.currentTime();
while ((EnvironmentEdgeManager.currentTime() - localStartWaiting) < subsequentPauseBase *
nbAttempt) {
while (
(EnvironmentEdgeManager.currentTime() - localStartWaiting)
< subsequentPauseBase * nbAttempt
) {
Thread.sleep(conf.getInt("hbase.lease.recovery.pause", 1000));
if (findIsFileClosedMeth) {
try {
@ -152,10 +154,10 @@ public final class RecoverLeaseFSUtils {
private static boolean checkIfTimedout(final Configuration conf, final long recoveryTimeout,
final int nbAttempt, final Path p, final long startWaiting) {
if (recoveryTimeout < EnvironmentEdgeManager.currentTime()) {
LOG.warn("Cannot recoverLease after trying for " +
conf.getInt("hbase.lease.recovery.timeout", 900000) +
"ms (hbase.lease.recovery.timeout); continuing, but may be DATALOSS!!!; " +
getLogMessageDetail(nbAttempt, p, startWaiting));
LOG.warn("Cannot recoverLease after trying for "
+ conf.getInt("hbase.lease.recovery.timeout", 900000)
+ "ms (hbase.lease.recovery.timeout); continuing, but may be DATALOSS!!!; "
+ getLogMessageDetail(nbAttempt, p, startWaiting));
return true;
}
return false;
@ -170,8 +172,8 @@ public final class RecoverLeaseFSUtils {
boolean recovered = false;
try {
recovered = dfs.recoverLease(p);
LOG.info((recovered ? "Recovered lease, " : "Failed to recover lease, ") +
getLogMessageDetail(nbAttempt, p, startWaiting));
LOG.info((recovered ? "Recovered lease, " : "Failed to recover lease, ")
+ getLogMessageDetail(nbAttempt, p, startWaiting));
} catch (IOException e) {
if (e instanceof LeaseExpiredException && e.getMessage().contains("File does not exist")) {
// This exception comes out instead of FNFE, fix it
@ -189,8 +191,8 @@ public final class RecoverLeaseFSUtils {
*/
private static String getLogMessageDetail(final int nbAttempt, final Path p,
final long startWaiting) {
return "attempt=" + nbAttempt + " on file=" + p + " after " +
(EnvironmentEdgeManager.currentTime() - startWaiting) + "ms";
return "attempt=" + nbAttempt + " on file=" + p + " after "
+ (EnvironmentEdgeManager.currentTime() - startWaiting) + "ms";
}
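The lease-recovery code above is essentially a bounded retry loop: call recoverLease, and while it keeps returning false, sleep in small increments (checking isFileClosed when available) until the per-attempt pause or the overall recovery timeout runs out. A dependency-free sketch of that control flow, with a hypothetical BooleanSupplier standing in for dfs.recoverLease(path) and illustrative timeouts:

```java
import java.util.function.BooleanSupplier;

// Sketch of the recoverLease retry/backoff control flow; recoverLeaseCall stands in for
// dfs.recoverLease(path) and the timeout values are illustrative, not the HBase defaults.
final class LeaseRecoverySketch {
  static boolean recoverWithRetries(BooleanSupplier recoverLeaseCall, long recoveryTimeoutMs,
      long firstPauseMs, long subsequentPauseMs, long pollMs) throws InterruptedException {
    long start = System.currentTimeMillis();
    for (int attempt = 0;; attempt++) {
      if (recoverLeaseCall.getAsBoolean()) {
        return true;  // lease recovered
      }
      if (System.currentTimeMillis() - start > recoveryTimeoutMs) {
        return false; // give up; the real helper logs a loud possible-data-loss warning here
      }
      if (attempt == 0) {
        Thread.sleep(firstPauseMs); // short first pause
      } else {
        // Later attempts wait longer, polling in small increments so a close is noticed early.
        long waitStart = System.currentTimeMillis();
        while (System.currentTimeMillis() - waitStart < subsequentPauseMs * attempt) {
          Thread.sleep(pollMs);
          // The real helper also calls dfs.isFileClosed(path) here when that method exists.
        }
      }
    }
  }

  public static void main(String[] args) throws InterruptedException {
    int[] calls = {0};
    boolean ok = recoverWithRetries(() -> ++calls[0] >= 3, 60_000, 100, 200, 50);
    System.out.println("recovered = " + ok);
  }
}
```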
/**


@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information


@ -19,6 +19,7 @@ package org.apache.hadoop.hbase.io.asyncfs;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.HBaseConfiguration;
@ -44,19 +45,15 @@ public class TestExcludeDatanodeManager {
StreamSlowMonitor streamSlowDNsMonitor =
excludeDatanodeManager.getStreamSlowMonitor("testMonitor");
assertEquals(0, excludeDatanodeManager.getExcludeDNs().size());
DatanodeInfo datanodeInfo =
new DatanodeInfo.DatanodeInfoBuilder().setIpAddr("0.0.0.0").setHostName("hostname1")
.setDatanodeUuid("uuid1").setXferPort(111).setInfoPort(222).setInfoSecurePort(333)
.setIpcPort(444).setNetworkLocation("location1").build();
streamSlowDNsMonitor
.checkProcessTimeAndSpeed(datanodeInfo, 100000, 5100,
System.currentTimeMillis() - 5100, 0);
streamSlowDNsMonitor
.checkProcessTimeAndSpeed(datanodeInfo, 100000, 5100,
System.currentTimeMillis() - 5100, 0);
streamSlowDNsMonitor
.checkProcessTimeAndSpeed(datanodeInfo, 100000, 5100,
System.currentTimeMillis() - 5100, 0);
DatanodeInfo datanodeInfo = new DatanodeInfo.DatanodeInfoBuilder().setIpAddr("0.0.0.0")
.setHostName("hostname1").setDatanodeUuid("uuid1").setXferPort(111).setInfoPort(222)
.setInfoSecurePort(333).setIpcPort(444).setNetworkLocation("location1").build();
streamSlowDNsMonitor.checkProcessTimeAndSpeed(datanodeInfo, 100000, 5100,
System.currentTimeMillis() - 5100, 0);
streamSlowDNsMonitor.checkProcessTimeAndSpeed(datanodeInfo, 100000, 5100,
System.currentTimeMillis() - 5100, 0);
streamSlowDNsMonitor.checkProcessTimeAndSpeed(datanodeInfo, 100000, 5100,
System.currentTimeMillis() - 5100, 0);
assertEquals(1, excludeDatanodeManager.getExcludeDNs().size());
assertTrue(excludeDatanodeManager.getExcludeDNs().containsKey(datanodeInfo));
}
@ -68,19 +65,15 @@ public class TestExcludeDatanodeManager {
StreamSlowMonitor streamSlowDNsMonitor =
excludeDatanodeManager.getStreamSlowMonitor("testMonitor");
assertEquals(0, excludeDatanodeManager.getExcludeDNs().size());
DatanodeInfo datanodeInfo =
new DatanodeInfo.DatanodeInfoBuilder().setIpAddr("0.0.0.0").setHostName("hostname1")
.setDatanodeUuid("uuid1").setXferPort(111).setInfoPort(222).setInfoSecurePort(333)
.setIpcPort(444).setNetworkLocation("location1").build();
streamSlowDNsMonitor
.checkProcessTimeAndSpeed(datanodeInfo, 5000, 7000,
System.currentTimeMillis() - 7000, 0);
streamSlowDNsMonitor
.checkProcessTimeAndSpeed(datanodeInfo, 5000, 7000,
System.currentTimeMillis() - 7000, 0);
streamSlowDNsMonitor
.checkProcessTimeAndSpeed(datanodeInfo, 5000, 7000,
System.currentTimeMillis() - 7000, 0);
DatanodeInfo datanodeInfo = new DatanodeInfo.DatanodeInfoBuilder().setIpAddr("0.0.0.0")
.setHostName("hostname1").setDatanodeUuid("uuid1").setXferPort(111).setInfoPort(222)
.setInfoSecurePort(333).setIpcPort(444).setNetworkLocation("location1").build();
streamSlowDNsMonitor.checkProcessTimeAndSpeed(datanodeInfo, 5000, 7000,
System.currentTimeMillis() - 7000, 0);
streamSlowDNsMonitor.checkProcessTimeAndSpeed(datanodeInfo, 5000, 7000,
System.currentTimeMillis() - 7000, 0);
streamSlowDNsMonitor.checkProcessTimeAndSpeed(datanodeInfo, 5000, 7000,
System.currentTimeMillis() - 7000, 0);
assertEquals(1, excludeDatanodeManager.getExcludeDNs().size());
assertTrue(excludeDatanodeManager.getExcludeDNs().containsKey(datanodeInfo));
}

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -57,6 +57,7 @@ import org.junit.experimental.categories.Category;
import org.junit.rules.TestName;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.apache.hbase.thirdparty.io.netty.channel.Channel;
import org.apache.hbase.thirdparty.io.netty.channel.EventLoop;
import org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup;
@ -240,9 +241,9 @@ public class TestFanOutOneBlockAsyncDFSOutput extends AsyncFSTestBase {
StreamSlowMonitor streamSlowDNsMonitor =
excludeDatanodeManager.getStreamSlowMonitor("testMonitor");
assertEquals(0, excludeDatanodeManager.getExcludeDNs().size());
try (FanOutOneBlockAsyncDFSOutput output = FanOutOneBlockAsyncDFSOutputHelper.createOutput(FS,
f, true, false, (short) 3, FS.getDefaultBlockSize(), eventLoop,
CHANNEL_CLASS, streamSlowDNsMonitor)) {
try (FanOutOneBlockAsyncDFSOutput output =
FanOutOneBlockAsyncDFSOutputHelper.createOutput(FS, f, true, false, (short) 3,
FS.getDefaultBlockSize(), eventLoop, CHANNEL_CLASS, streamSlowDNsMonitor)) {
// should exclude the dead dn when retry so here we only have 2 DNs in pipeline
assertEquals(2, output.getPipeline().length);
assertEquals(1, excludeDatanodeManager.getExcludeDNs().size());

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -47,6 +47,7 @@ import org.junit.experimental.categories.Category;
import org.junit.rules.TestName;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf;
import org.apache.hbase.thirdparty.io.netty.channel.Channel;
import org.apache.hbase.thirdparty.io.netty.channel.ChannelHandlerContext;
@ -70,10 +71,10 @@ public class TestFanOutOneBlockAsyncDFSOutputHang extends AsyncFSTestBase {
@ClassRule
public static final HBaseClassTestRule CLASS_RULE =
HBaseClassTestRule.forClass(TestFanOutOneBlockAsyncDFSOutputHang.class);
HBaseClassTestRule.forClass(TestFanOutOneBlockAsyncDFSOutputHang.class);
private static final Logger LOG =
LoggerFactory.getLogger(TestFanOutOneBlockAsyncDFSOutputHang.class);
LoggerFactory.getLogger(TestFanOutOneBlockAsyncDFSOutputHang.class);
private static DistributedFileSystem FS;

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -21,6 +21,7 @@ import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_CLIENT_SOCKET_TIMEOUT_KEY
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATA_ENCRYPTION_ALGORITHM_KEY;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ENCRYPT_DATA_TRANSFER_CIPHER_SUITES_KEY;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ENCRYPT_DATA_TRANSFER_KEY;
import java.io.File;
import java.io.IOException;
import java.lang.reflect.Method;
@ -62,6 +63,7 @@ import org.junit.runners.Parameterized.Parameter;
import org.junit.runners.Parameterized.Parameters;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.apache.hbase.thirdparty.io.netty.channel.Channel;
import org.apache.hbase.thirdparty.io.netty.channel.EventLoop;
import org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup;

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -31,7 +31,7 @@ public class TestSendBufSizePredictor {
@ClassRule
public static final HBaseClassTestRule CLASS_RULE =
HBaseClassTestRule.forClass(TestSendBufSizePredictor.class);
HBaseClassTestRule.forClass(TestSendBufSizePredictor.class);
@Test
public void test() {

View File

@ -110,9 +110,9 @@ public final class HBaseKerberosUtils {
/**
* Set up configuration for a secure HDFS+HBase cluster.
* @param conf configuration object.
* @param conf configuration object.
* @param servicePrincipal service principal used by NN, HM and RS.
* @param spnegoPrincipal SPNEGO principal used by NN web UI.
* @param spnegoPrincipal SPNEGO principal used by NN web UI.
*/
public static void setSecuredConfiguration(Configuration conf, String servicePrincipal,
String spnegoPrincipal) {
@ -156,7 +156,7 @@ public final class HBaseKerberosUtils {
/**
* Set up SSL configuration for HDFS NameNode and DataNode.
* @param utility a HBaseTestingUtility object.
* @param clazz the caller test class.
* @param clazz the caller test class.
* @throws Exception if unable to set up SSL configuration
*/
public static void setSSLConfiguration(HBaseCommonTestingUtility utility, Class<?> clazz)

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -20,7 +20,6 @@ package org.apache.hadoop.hbase.util;
import static org.junit.Assert.assertTrue;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseClassTestRule;
@ -69,8 +68,8 @@ public class TestRecoverLeaseFSUtils {
Mockito.verify(dfs, Mockito.times(5)).recoverLease(FILE);
// Make sure we waited at least hbase.lease.recovery.dfs.timeout * 3 (the first two
// invocations will happen pretty fast... the we fall into the longer wait loop).
assertTrue((EnvironmentEdgeManager.currentTime() - startTime) > (3 *
HTU.getConfiguration().getInt("hbase.lease.recovery.dfs.timeout", 61000)));
assertTrue((EnvironmentEdgeManager.currentTime() - startTime)
> (3 * HTU.getConfiguration().getInt("hbase.lease.recovery.dfs.timeout", 61000)));
}
/**

View File

@ -1,4 +1,4 @@
<?xml version="1.0"?>
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
@ -21,17 +21,29 @@
-->
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase</artifactId>
<version>2.5.0-SNAPSHOT</version>
<relativePath>..</relativePath>
</parent>
<artifactId>hbase-build-configuration</artifactId>
<name>Apache HBase - Build Configuration</name>
<description>Configure the build-support artifacts for maven build</description>
<packaging>pom</packaging>
<name>Apache HBase - Build Configuration</name>
<description>Configure the build-support artifacts for maven build</description>
<dependencies>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-annotations</artifactId>
<type>test-jar</type>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.yetus</groupId>
<artifactId>audience-annotations</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
@ -50,18 +62,6 @@
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-annotations</artifactId>
<type>test-jar</type>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.yetus</groupId>
<artifactId>audience-annotations</artifactId>
</dependency>
</dependencies>
<profiles>
<profile>

View File

@ -1,7 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0"
xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
* Licensed to the Apache Software Foundation (ASF) under one
@ -21,38 +19,38 @@
* limitations under the License.
*/
-->
<modelVersion>4.0.0</modelVersion>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-checkstyle</artifactId>
<version>2.5.0-SNAPSHOT</version>
<name>Apache HBase - Checkstyle</name>
<description>Module to hold Checkstyle properties for HBase.</description>
<!--REMOVE-->
<modelVersion>4.0.0</modelVersion>
<!--REMOVE-->
<parent>
<artifactId>hbase</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase</artifactId>
<version>2.5.0-SNAPSHOT</version>
<relativePath>..</relativePath>
</parent>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-checkstyle</artifactId>
<version>2.5.0-SNAPSHOT</version>
<name>Apache HBase - Checkstyle</name>
<description>Module to hold Checkstyle properties for HBase.</description>
<build>
<plugins>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-site-plugin</artifactId>
<configuration>
<skip>true</skip>
</configuration>
</plugin>
<plugin>
<!--Make it so assembly:single does nothing in here-->
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<skipAssembly>true</skipAssembly>
</configuration>
</plugin>
</plugins>
</build>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-site-plugin</artifactId>
<configuration>
<skip>true</skip>
</configuration>
</plugin>
<plugin>
<!--Make it so assembly:single does nothing in here-->
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<skipAssembly>true</skipAssembly>
</configuration>
</plugin>
</plugins>
</build>
</project>

View File

@ -1,6 +1,5 @@
<?xml version="1.0"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
* Licensed to the Apache Software Foundation (ASF) under one
@ -22,8 +21,8 @@
-->
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase-build-configuration</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-build-configuration</artifactId>
<version>2.5.0-SNAPSHOT</version>
<relativePath>../hbase-build-configuration</relativePath>
</parent>
@ -31,28 +30,6 @@
<artifactId>hbase-client</artifactId>
<name>Apache HBase - Client</name>
<description>Client of HBase</description>
<!--REMOVE-->
<build>
<plugins>
<plugin>
<!--Make it so assembly:single does nothing in here-->
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<skipAssembly>true</skipAssembly>
</configuration>
</plugin>
<!-- Make a jar and put the sources in the jar -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-source-plugin</artifactId>
</plugin>
<plugin>
<groupId>net.revelc.code</groupId>
<artifactId>warbucks-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
@ -221,6 +198,28 @@
</exclusions>
</dependency>
</dependencies>
<!--REMOVE-->
<build>
<plugins>
<plugin>
<!--Make it so assembly:single does nothing in here-->
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<skipAssembly>true</skipAssembly>
</configuration>
</plugin>
<!-- Make a jar and put the sources in the jar -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-source-plugin</artifactId>
</plugin>
<plugin>
<groupId>net.revelc.code</groupId>
<artifactId>warbucks-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
<profiles>
<!-- Skip the tests in this module -->
@ -242,8 +241,9 @@
<id>hadoop-2.0</id>
<activation>
<property>
<!--Below formatting for dev-support/generate-hadoopX-poms.sh-->
<!--h2--><name>!hadoop.profile</name>
<!--Below formatting for dev-support/generate-hadoopX-poms.sh-->
<!--h2-->
<name>!hadoop.profile</name>
</property>
</activation>
<dependencies>
@ -398,8 +398,7 @@
<artifactId>lifecycle-mapping</artifactId>
<configuration>
<lifecycleMappingMetadata>
<pluginExecutions>
</pluginExecutions>
<pluginExecutions/>
</lifecycleMappingMetadata>
</configuration>
</plugin>

View File

@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -23,8 +22,8 @@ import org.apache.yetus.audience.InterfaceAudience;
/**
* Interface to support the aborting of a given server or client.
* <p>
* This is used primarily for ZooKeeper usage when we could get an unexpected
* and fatal exception, requiring an abort.
* This is used primarily for ZooKeeper usage when we could get an unexpected and fatal exception,
* requiring an abort.
* <p>
* Implemented by the Master, RegionServer, and TableServers (client).
*/
@ -33,13 +32,12 @@ public interface Abortable {
/**
* Abort the server or client.
* @param why Why we're aborting.
* @param e Throwable that caused abort. Can be null.
* @param e Throwable that caused abort. Can be null.
*/
void abort(String why, Throwable e);
/**
* It just call another abort method and the Throwable
* parameter is null.
* It just call another abort method and the Throwable parameter is null.
* @param why Why we're aborting.
* @see Abortable#abort(String, Throwable)
*/

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -63,21 +63,20 @@ public class AsyncMetaTableAccessor {
private static final Logger LOG = LoggerFactory.getLogger(AsyncMetaTableAccessor.class);
/** The delimiter for meta columns for replicaIds &gt; 0 */
private static final char META_REPLICA_ID_DELIMITER = '_';
/** A regex for parsing server columns from meta. See above javadoc for meta layout */
private static final Pattern SERVER_COLUMN_PATTERN = Pattern
.compile("^server(_[0-9a-fA-F]{4})?$");
private static final Pattern SERVER_COLUMN_PATTERN =
Pattern.compile("^server(_[0-9a-fA-F]{4})?$");
public static CompletableFuture<Boolean> tableExists(AsyncTable<?> metaTable,
TableName tableName) {
TableName tableName) {
return getTableState(metaTable, tableName).thenApply(Optional::isPresent);
}
public static CompletableFuture<Optional<TableState>> getTableState(AsyncTable<?> metaTable,
TableName tableName) {
TableName tableName) {
CompletableFuture<Optional<TableState>> future = new CompletableFuture<>();
Get get = new Get(tableName.getName()).addColumn(getTableFamily(), getStateColumn());
long time = EnvironmentEdgeManager.currentTime();
@ -101,13 +100,12 @@ public class AsyncMetaTableAccessor {
}
/**
* Returns the HRegionLocation from meta for the given region
* @param metaTable
* @param regionName region we're looking for
* Returns the HRegionLocation from meta for the given region n * @param regionName region we're
* looking for
* @return HRegionLocation for the given region
*/
public static CompletableFuture<Optional<HRegionLocation>> getRegionLocation(
AsyncTable<?> metaTable, byte[] regionName) {
public static CompletableFuture<Optional<HRegionLocation>>
getRegionLocation(AsyncTable<?> metaTable, byte[] regionName) {
CompletableFuture<Optional<HRegionLocation>> future = new CompletableFuture<>();
try {
RegionInfo parsedRegionInfo = MetaTableAccessor.parseRegionInfoFromRegionName(regionName);
@ -128,13 +126,12 @@ public class AsyncMetaTableAccessor {
}
/**
* Returns the HRegionLocation from meta for the given encoded region name
* @param metaTable
* @param encodedRegionName region we're looking for
* Returns the HRegionLocation from meta for the given encoded region name n * @param
* encodedRegionName region we're looking for
* @return HRegionLocation for the given region
*/
public static CompletableFuture<Optional<HRegionLocation>> getRegionLocationWithEncodedName(
AsyncTable<?> metaTable, byte[] encodedRegionName) {
public static CompletableFuture<Optional<HRegionLocation>>
getRegionLocationWithEncodedName(AsyncTable<?> metaTable, byte[] encodedRegionName) {
CompletableFuture<Optional<HRegionLocation>> future = new CompletableFuture<>();
addListener(
metaTable
@ -149,8 +146,10 @@ public class AsyncMetaTableAccessor {
.filter(result -> MetaTableAccessor.getRegionInfo(result) != null).forEach(result -> {
getRegionLocations(result).ifPresent(locations -> {
for (HRegionLocation location : locations.getRegionLocations()) {
if (location != null &&
encodedRegionNameStr.equals(location.getRegion().getEncodedName())) {
if (
location != null
&& encodedRegionNameStr.equals(location.getRegion().getEncodedName())
) {
future.complete(Optional.of(location));
return;
}
@ -166,24 +165,22 @@ public class AsyncMetaTableAccessor {
Cell cell = r.getColumnLatestCell(getTableFamily(), getStateColumn());
if (cell == null) return Optional.empty();
try {
return Optional.of(TableState.parseFrom(
TableName.valueOf(r.getRow()),
Arrays.copyOfRange(cell.getValueArray(), cell.getValueOffset(), cell.getValueOffset()
+ cell.getValueLength())));
return Optional.of(
TableState.parseFrom(TableName.valueOf(r.getRow()), Arrays.copyOfRange(cell.getValueArray(),
cell.getValueOffset(), cell.getValueOffset() + cell.getValueLength())));
} catch (DeserializationException e) {
throw new IOException("Failed to parse table state from result: " + r, e);
}
}
/**
* Used to get all region locations for the specific table.
* @param metaTable
* @param tableName table we're looking for, can be null for getting all regions
* Used to get all region locations for the specific table. n * @param tableName table we're
* looking for, can be null for getting all regions
* @return the list of region locations. The return value will be wrapped by a
* {@link CompletableFuture}.
*/
public static CompletableFuture<List<HRegionLocation>> getTableHRegionLocations(
AsyncTable<AdvancedScanResultConsumer> metaTable, TableName tableName) {
AsyncTable<AdvancedScanResultConsumer> metaTable, TableName tableName) {
CompletableFuture<List<HRegionLocation>> future = new CompletableFuture<>();
addListener(getTableRegionsAndLocations(metaTable, tableName, true), (locations, err) -> {
if (err != null) {
@ -201,54 +198,53 @@ public class AsyncMetaTableAccessor {
}
/**
* Used to get table regions' info and server.
* @param metaTable
* @param tableName table we're looking for, can be null for getting all regions
* Used to get table regions' info and server. n * @param tableName table we're looking for, can
* be null for getting all regions
* @param excludeOfflinedSplitParents don't return split parents
* @return the list of regioninfos and server. The return value will be wrapped by a
* {@link CompletableFuture}.
*/
private static CompletableFuture<List<Pair<RegionInfo, ServerName>>> getTableRegionsAndLocations(
final AsyncTable<AdvancedScanResultConsumer> metaTable,
final TableName tableName, final boolean excludeOfflinedSplitParents) {
final AsyncTable<AdvancedScanResultConsumer> metaTable, final TableName tableName,
final boolean excludeOfflinedSplitParents) {
CompletableFuture<List<Pair<RegionInfo, ServerName>>> future = new CompletableFuture<>();
if (TableName.META_TABLE_NAME.equals(tableName)) {
future.completeExceptionally(new IOException(
"This method can't be used to locate meta regions;" + " use MetaTableLocator instead"));
"This method can't be used to locate meta regions;" + " use MetaTableLocator instead"));
}
// Make a version of CollectingVisitor that collects RegionInfo and ServerAddress
CollectingVisitor<Pair<RegionInfo, ServerName>> visitor =
new CollectingVisitor<Pair<RegionInfo, ServerName>>() {
private RegionLocations current = null;
private RegionLocations current = null;
@Override
public boolean visit(Result r) throws IOException {
Optional<RegionLocations> currentRegionLocations = getRegionLocations(r);
current = currentRegionLocations.orElse(null);
if (current == null || current.getRegionLocation().getRegion() == null) {
LOG.warn("No serialized RegionInfo in " + r);
return true;
@Override
public boolean visit(Result r) throws IOException {
Optional<RegionLocations> currentRegionLocations = getRegionLocations(r);
current = currentRegionLocations.orElse(null);
if (current == null || current.getRegionLocation().getRegion() == null) {
LOG.warn("No serialized RegionInfo in " + r);
return true;
}
RegionInfo hri = current.getRegionLocation().getRegion();
if (excludeOfflinedSplitParents && hri.isSplitParent()) return true;
// Else call super and add this Result to the collection.
return super.visit(r);
}
RegionInfo hri = current.getRegionLocation().getRegion();
if (excludeOfflinedSplitParents && hri.isSplitParent()) return true;
// Else call super and add this Result to the collection.
return super.visit(r);
}
@Override
void add(Result r) {
if (current == null) {
return;
}
for (HRegionLocation loc : current.getRegionLocations()) {
if (loc != null) {
this.results.add(new Pair<RegionInfo, ServerName>(loc.getRegion(), loc
.getServerName()));
@Override
void add(Result r) {
if (current == null) {
return;
}
for (HRegionLocation loc : current.getRegionLocations()) {
if (loc != null) {
this.results
.add(new Pair<RegionInfo, ServerName>(loc.getRegion(), loc.getServerName()));
}
}
}
}
};
};
addListener(scanMeta(metaTable, tableName, QueryType.REGION, visitor), (v, error) -> {
if (error != null) {
@ -261,29 +257,25 @@ public class AsyncMetaTableAccessor {
}
/**
* Performs a scan of META table for given table.
* @param metaTable
* @param tableName table withing we scan
* @param type scanned part of meta
* Performs a scan of META table for given table. n * @param tableName table withing we scan
* @param type scanned part of meta
* @param visitor Visitor invoked against each row
*/
private static CompletableFuture<Void> scanMeta(AsyncTable<AdvancedScanResultConsumer> metaTable,
TableName tableName, QueryType type, final Visitor visitor) {
TableName tableName, QueryType type, final Visitor visitor) {
return scanMeta(metaTable, getTableStartRowForMeta(tableName, type),
getTableStopRowForMeta(tableName, type), type, Integer.MAX_VALUE, visitor);
}
/**
* Performs a scan of META table for given table.
* @param metaTable
* @param startRow Where to start the scan
* Performs a scan of META table for given table. n * @param startRow Where to start the scan
* @param stopRow Where to stop the scan
* @param type scanned part of meta
* @param type scanned part of meta
* @param maxRows maximum rows to return
* @param visitor Visitor invoked against each row
*/
private static CompletableFuture<Void> scanMeta(AsyncTable<AdvancedScanResultConsumer> metaTable,
byte[] startRow, byte[] stopRow, QueryType type, int maxRows, final Visitor visitor) {
byte[] startRow, byte[] stopRow, QueryType type, int maxRows, final Visitor visitor) {
int rowUpperLimit = maxRows > 0 ? maxRows : Integer.MAX_VALUE;
Scan scan = getMetaScan(metaTable, rowUpperLimit);
for (byte[] family : type.getFamilies()) {
@ -298,8 +290,8 @@ public class AsyncMetaTableAccessor {
if (LOG.isDebugEnabled()) {
LOG.debug("Scanning META" + " starting at row=" + Bytes.toStringBinary(scan.getStartRow())
+ " stopping at row=" + Bytes.toStringBinary(scan.getStopRow()) + " for max="
+ rowUpperLimit + " with caching=" + scan.getCaching());
+ " stopping at row=" + Bytes.toStringBinary(scan.getStopRow()) + " for max="
+ rowUpperLimit + " with caching=" + scan.getCaching());
}
CompletableFuture<Void> future = new CompletableFuture<Void>();
@ -318,7 +310,7 @@ public class AsyncMetaTableAccessor {
private final CompletableFuture<Void> future;
MetaTableScanResultConsumer(int rowUpperLimit, Visitor visitor,
CompletableFuture<Void> future) {
CompletableFuture<Void> future) {
this.rowUpperLimit = rowUpperLimit;
this.visitor = visitor;
this.future = future;
@ -332,7 +324,7 @@ public class AsyncMetaTableAccessor {
@Override
@edu.umd.cs.findbugs.annotations.SuppressWarnings(value = "NP_NONNULL_PARAM_VIOLATION",
justification = "https://github.com/findbugsproject/findbugs/issues/79")
justification = "https://github.com/findbugsproject/findbugs/issues/79")
public void onComplete() {
future.complete(null);
}
@ -366,8 +358,10 @@ public class AsyncMetaTableAccessor {
Scan scan = new Scan();
int scannerCaching = metaTable.getConfiguration().getInt(HConstants.HBASE_META_SCANNER_CACHING,
HConstants.DEFAULT_HBASE_META_SCANNER_CACHING);
if (metaTable.getConfiguration().getBoolean(HConstants.USE_META_REPLICAS,
HConstants.DEFAULT_USE_META_REPLICAS)) {
if (
metaTable.getConfiguration().getBoolean(HConstants.USE_META_REPLICAS,
HConstants.DEFAULT_USE_META_REPLICAS)
) {
scan.setConsistency(Consistency.TIMELINE);
}
if (rowUpperLimit <= scannerCaching) {
@ -423,16 +417,15 @@ public class AsyncMetaTableAccessor {
}
/**
* Returns the HRegionLocation parsed from the given meta row Result
* for the given regionInfo and replicaId. The regionInfo can be the default region info
* for the replica.
* @param r the meta row result
* Returns the HRegionLocation parsed from the given meta row Result for the given regionInfo and
* replicaId. The regionInfo can be the default region info for the replica.
* @param r the meta row result
* @param regionInfo RegionInfo for default replica
* @param replicaId the replicaId for the HRegionLocation
* @param replicaId the replicaId for the HRegionLocation
* @return HRegionLocation parsed from the given meta row Result for the given replicaId
*/
private static HRegionLocation getRegionLocation(final Result r, final RegionInfo regionInfo,
final int replicaId) {
final int replicaId) {
Optional<ServerName> serverName = getServerName(r, replicaId);
long seqNum = getSeqNumDuringOpen(r, replicaId);
RegionInfo replicaInfo = RegionReplicaUtil.getRegionInfoForReplica(regionInfo, replicaId);
@ -448,8 +441,8 @@ public class AsyncMetaTableAccessor {
byte[] serverColumn = getServerColumn(replicaId);
Cell cell = r.getColumnLatestCell(getCatalogFamily(), serverColumn);
if (cell == null || cell.getValueLength() == 0) return Optional.empty();
String hostAndPort = Bytes.toString(cell.getValueArray(), cell.getValueOffset(),
cell.getValueLength());
String hostAndPort =
Bytes.toString(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength());
byte[] startcodeColumn = getStartCodeColumn(replicaId);
cell = r.getColumnLatestCell(getCatalogFamily(), startcodeColumn);
if (cell == null || cell.getValueLength() == 0) return Optional.empty();
@ -463,8 +456,8 @@ public class AsyncMetaTableAccessor {
}
/**
* The latest seqnum that the server writing to meta observed when opening the region.
* E.g. the seqNum when the result of {@link #getServerName(Result, int)} was written.
* The latest seqnum that the server writing to meta observed when opening the region. E.g. the
* seqNum when the result of {@link #getServerName(Result, int)} was written.
* @param r Result to pull the seqNum from
* @return SeqNum, or HConstants.NO_SEQNUM if there's no value written.
*/
@ -533,7 +526,7 @@ public class AsyncMetaTableAccessor {
/**
* Returns the RegionInfo object from the column {@link HConstants#CATALOG_FAMILY} and
* <code>qualifier</code> of the catalog table result.
* @param r a Result object from the catalog table scan
* @param r a Result object from the catalog table scan
* @param qualifier Column family qualifier
* @return An RegionInfo instance.
*/
@ -585,7 +578,7 @@ public class AsyncMetaTableAccessor {
return replicaId == 0
? HConstants.SERVER_QUALIFIER
: Bytes.toBytes(HConstants.SERVER_QUALIFIER_STR + META_REPLICA_ID_DELIMITER
+ String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId));
+ String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId));
}
/**
@ -597,7 +590,7 @@ public class AsyncMetaTableAccessor {
return replicaId == 0
? HConstants.STARTCODE_QUALIFIER
: Bytes.toBytes(HConstants.STARTCODE_QUALIFIER_STR + META_REPLICA_ID_DELIMITER
+ String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId));
+ String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId));
}
/**
@ -609,12 +602,12 @@ public class AsyncMetaTableAccessor {
return replicaId == 0
? HConstants.SEQNUM_QUALIFIER
: Bytes.toBytes(HConstants.SEQNUM_QUALIFIER_STR + META_REPLICA_ID_DELIMITER
+ String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId));
+ String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId));
}
/**
* Parses the replicaId from the server column qualifier. See top of the class javadoc
* for the actual meta layout
* Parses the replicaId from the server column qualifier. See top of the class javadoc for the
* actual meta layout
* @param serverColumn the column qualifier
* @return an int for the replicaId
*/

View File

@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -21,7 +20,6 @@ package org.apache.hadoop.hbase;
import java.util.Collections;
import java.util.Map;
import java.util.stream.Collectors;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.yetus.audience.InterfaceAudience;
@ -56,9 +54,8 @@ public final class CacheEvictionStats {
private String getFailedRegions() {
return exceptions.keySet().stream()
.map(regionName -> RegionInfo.prettyPrint(RegionInfo.encodeRegionName(regionName)))
.collect(Collectors.toList())
.toString();
.map(regionName -> RegionInfo.prettyPrint(RegionInfo.encodeRegionName(regionName)))
.collect(Collectors.toList()).toString();
}
@InterfaceAudience.Private
@ -68,11 +65,8 @@ public final class CacheEvictionStats {
@Override
public String toString() {
return "CacheEvictionStats{" +
"evictedBlocks=" + evictedBlocks +
", maxCacheSize=" + maxCacheSize +
", failedRegionsSize=" + getExceptionCount() +
", failedRegions=" + getFailedRegions() +
'}';
return "CacheEvictionStats{" + "evictedBlocks=" + evictedBlocks + ", maxCacheSize="
+ maxCacheSize + ", failedRegionsSize=" + getExceptionCount() + ", failedRegions="
+ getFailedRegions() + '}';
}
}

View File

@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -39,4 +38,4 @@ public class CacheEvictionStatsAggregator {
public synchronized CacheEvictionStats sum() {
return this.builder.build();
}
}
}

View File

@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -20,7 +19,6 @@ package org.apache.hadoop.hbase;
import java.util.HashMap;
import java.util.Map;
import org.apache.yetus.audience.InterfaceAudience;
@InterfaceAudience.Private
@ -42,7 +40,7 @@ public final class CacheEvictionStatsBuilder {
return this;
}
public void addException(byte[] regionName, Throwable ie){
public void addException(byte[] regionName, Throwable ie) {
exceptions.put(regionName, ie);
}

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,14 +15,13 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase;
import org.apache.yetus.audience.InterfaceAudience;
/**
* Returned to the clients when their request was discarded due to server being overloaded.
* Clients should retry upon receiving it.
* Returned to the clients when their request was discarded due to server being overloaded. Clients
* should retry upon receiving it.
*/
@SuppressWarnings("serial")
@InterfaceAudience.Public

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,14 +15,13 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase;
import org.apache.yetus.audience.InterfaceAudience;
/**
* Returned to clients when their request was dropped because the call queue was too big to
* accept a new call. Clients should retry upon receiving it.
* Returned to clients when their request was dropped because the call queue was too big to accept a
* new call. Clients should retry upon receiving it.
*/
@SuppressWarnings("serial")
@InterfaceAudience.Public

View File

@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -19,12 +18,10 @@
package org.apache.hadoop.hbase;
import java.io.IOException;
import org.apache.yetus.audience.InterfaceAudience;
/**
* This exception is thrown by the master when a region server clock skew is
* too high.
* This exception is thrown by the master when a region server clock skew is too high.
*/
@SuppressWarnings("serial")
@InterfaceAudience.Public

View File

@ -15,29 +15,27 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase;
import java.io.IOException;
import java.util.UUID;
import org.apache.yetus.audience.InterfaceAudience;
import org.apache.hadoop.hbase.exceptions.DeserializationException;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.yetus.audience.InterfaceAudience;
import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterIdProtos;
import org.apache.hadoop.hbase.util.Bytes;
/**
* The identifier for this cluster.
* It is serialized to the filesystem and up into zookeeper. This is a container for the id.
* Also knows how to serialize and deserialize the cluster id.
* The identifier for this cluster. It is serialized to the filesystem and up into zookeeper. This
* is a container for the id. Also knows how to serialize and deserialize the cluster id.
*/
@InterfaceAudience.Private
public class ClusterId {
private final String id;
/**
* New ClusterID. Generates a uniqueid.
* New ClusterID. Generates a uniqueid.
*/
public ClusterId() {
this(UUID.randomUUID().toString());
@ -50,17 +48,15 @@ public class ClusterId {
/**
* @return The clusterid serialized using pb w/ pb magic prefix
*/
public byte [] toByteArray() {
public byte[] toByteArray() {
return ProtobufUtil.prependPBMagic(convert().toByteArray());
}
/**
* @param bytes A pb serialized {@link ClusterId} instance with pb magic prefix
* @return An instance of {@link ClusterId} made from <code>bytes</code>
* @throws DeserializationException
* @see #toByteArray()
* @return An instance of {@link ClusterId} made from <code>bytes</code> n * @see #toByteArray()
*/
public static ClusterId parseFrom(final byte [] bytes) throws DeserializationException {
public static ClusterId parseFrom(final byte[] bytes) throws DeserializationException {
if (ProtobufUtil.isPBMagicPrefix(bytes)) {
int pblen = ProtobufUtil.lengthOfPBMagic();
ClusterIdProtos.ClusterId.Builder builder = ClusterIdProtos.ClusterId.newBuilder();
@ -87,8 +83,7 @@ public class ClusterId {
}
/**
* @param cid
* @return A {@link ClusterId} made from the passed in <code>cid</code>
* n * @return A {@link ClusterId} made from the passed in <code>cid</code>
*/
public static ClusterId convert(final ClusterIdProtos.ClusterId cid) {
return new ClusterId(cid.getClusterId());
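(Note, not part of the diff: ClusterId.toByteArray()/parseFrom() above wrap the protobuf bytes with a short magic prefix so a reader can tell a serialized ClusterId apart from arbitrary bytes. A generic JDK-only sketch of that framing idea follows; the marker bytes and names are invented for illustration and are not the constants HBase uses.)

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Illustrative sketch only: prepend a fixed marker on write, verify and strip it on read.
public final class MagicPrefixSketch {
  private static final byte[] MAGIC = { 'M', 'G', 'C', '1' }; // hypothetical marker

  static byte[] prepend(byte[] payload) {
    byte[] out = new byte[MAGIC.length + payload.length];
    System.arraycopy(MAGIC, 0, out, 0, MAGIC.length);
    System.arraycopy(payload, 0, out, MAGIC.length, payload.length);
    return out;
  }

  static byte[] strip(byte[] framed) {
    if (framed.length < MAGIC.length
      || !Arrays.equals(Arrays.copyOfRange(framed, 0, MAGIC.length), MAGIC)) {
      throw new IllegalArgumentException("missing magic prefix");
    }
    return Arrays.copyOfRange(framed, MAGIC.length, framed.length);
  }

  public static void main(String[] args) {
    byte[] framed = prepend("some-cluster-uuid".getBytes(StandardCharsets.UTF_8));
    System.out.println(new String(strip(framed), StandardCharsets.UTF_8)); // round trip
  }
}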

View File

@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -16,7 +15,6 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase;
import edu.umd.cs.findbugs.annotations.Nullable;
@ -39,28 +37,32 @@ import org.apache.yetus.audience.InterfaceAudience;
* <li>The average cluster load.</li>
* <li>The number of regions deployed on the cluster.</li>
* <li>The number of requests since last report.</li>
* <li>Detailed region server loading and resource usage information,
* per server and per region.</li>
* <li>Detailed region server loading and resource usage information, per server and per
* region.</li>
* <li>Regions in transition at master</li>
* <li>The unique cluster ID</li>
* </ul>
* <tt>{@link Option}</tt> provides a way to get desired ClusterStatus information.
* The following codes will get all the cluster information.
* <tt>{@link Option}</tt> provides a way to get desired ClusterStatus information. The following
* codes will get all the cluster information.
*
* <pre>
* {@code
* // Original version still works
* Admin admin = connection.getAdmin();
* ClusterMetrics metrics = admin.getClusterStatus();
* // or below, a new version which has the same effects
* ClusterMetrics metrics = admin.getClusterStatus(EnumSet.allOf(Option.class));
* {
* &#64;code
* // Original version still works
* Admin admin = connection.getAdmin();
* ClusterMetrics metrics = admin.getClusterStatus();
* // or below, a new version which has the same effects
* ClusterMetrics metrics = admin.getClusterStatus(EnumSet.allOf(Option.class));
* }
* </pre>
* If information about live servers is the only wanted.
* then codes in the following way:
*
* If information about live servers is the only wanted. then codes in the following way:
*
* <pre>
* {@code
* Admin admin = connection.getAdmin();
* ClusterMetrics metrics = admin.getClusterStatus(EnumSet.of(Option.LIVE_SERVERS));
* {
* &#64;code
* Admin admin = connection.getAdmin();
* ClusterMetrics metrics = admin.getClusterStatus(EnumSet.of(Option.LIVE_SERVERS));
* }
* </pre>
*/
@ -88,7 +90,7 @@ public interface ClusterMetrics {
*/
default int getRegionCount() {
return getLiveServerMetrics().entrySet().stream()
.mapToInt(v -> v.getValue().getRegionMetrics().size()).sum();
.mapToInt(v -> v.getValue().getRegionMetrics().size()).sum();
}
/**
@ -96,8 +98,8 @@ public interface ClusterMetrics {
*/
default long getRequestCount() {
return getLiveServerMetrics().entrySet().stream()
.flatMap(v -> v.getValue().getRegionMetrics().values().stream())
.mapToLong(RegionMetrics::getRequestCount).sum();
.flatMap(v -> v.getValue().getRegionMetrics().values().stream())
.mapToLong(RegionMetrics::getRequestCount).sum();
}
/**
@ -122,17 +124,15 @@ public interface ClusterMetrics {
default long getLastMajorCompactionTimestamp(TableName table) {
return getLiveServerMetrics().values().stream()
.flatMap(s -> s.getRegionMetrics().values().stream())
.filter(r -> RegionInfo.getTable(r.getRegionName()).equals(table))
.mapToLong(RegionMetrics::getLastMajorCompactionTimestamp).min().orElse(0);
.flatMap(s -> s.getRegionMetrics().values().stream())
.filter(r -> RegionInfo.getTable(r.getRegionName()).equals(table))
.mapToLong(RegionMetrics::getLastMajorCompactionTimestamp).min().orElse(0);
}
default long getLastMajorCompactionTimestamp(byte[] regionName) {
return getLiveServerMetrics().values().stream()
.filter(s -> s.getRegionMetrics().containsKey(regionName))
.findAny()
.map(s -> s.getRegionMetrics().get(regionName).getLastMajorCompactionTimestamp())
.orElse(0L);
.filter(s -> s.getRegionMetrics().containsKey(regionName)).findAny()
.map(s -> s.getRegionMetrics().get(regionName).getLastMajorCompactionTimestamp()).orElse(0L);
}
@Nullable
@ -150,13 +150,12 @@ public interface ClusterMetrics {
if (serverSize == 0) {
return 0;
}
return (double)getRegionCount() / (double)serverSize;
return (double) getRegionCount() / (double) serverSize;
}
/**
* Provide region states count for given table.
* e.g howmany regions of give table are opened/closed/rit etc
*
* Provide region states count for given table. e.g howmany regions of give table are
* opened/closed/rit etc
* @return map of table to region states count
*/
Map<TableName, RegionStatesCount> getTableRegionStatesCount();
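(Note, not part of the diff: the ClusterMetrics hunks above only reflow the interface's default methods, which aggregate per-server metrics with Java streams — summing region counts and request counts across the live-server map. A rough JDK-only sketch of that aggregation style follows, with placeholder types instead of the HBase classes.)

import java.util.Map;

// Illustrative sketch only: stream over a server -> stats map and sum one field.
public final class MetricsAggregationSketch {
  record ServerStats(int regionCount, long requestCount) {}

  static int totalRegions(Map<String, ServerStats> byServer) {
    return byServer.values().stream().mapToInt(ServerStats::regionCount).sum();
  }

  static long totalRequests(Map<String, ServerStats> byServer) {
    return byServer.values().stream().mapToLong(ServerStats::requestCount).sum();
  }

  public static void main(String[] args) {
    Map<String, ServerStats> byServer =
      Map.of("rs1", new ServerStats(12, 3400L), "rs2", new ServerStats(8, 1200L));
    System.out.println(totalRegions(byServer) + " regions, " + totalRequests(byServer) + " requests");
  }
}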

View File

@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -16,7 +15,6 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase;
import edu.umd.cs.findbugs.annotations.Nullable;
@ -26,13 +24,13 @@ import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;
import org.apache.hadoop.hbase.client.RegionStatesCount;
import org.apache.hadoop.hbase.master.RegionState;
import org.apache.yetus.audience.InterfaceAudience;
import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
import org.apache.hbase.thirdparty.com.google.protobuf.UnsafeByteOperations;
import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos;
import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos.Option;
@ -43,42 +41,34 @@ import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
public final class ClusterMetricsBuilder {
public static ClusterStatusProtos.ClusterStatus toClusterStatus(ClusterMetrics metrics) {
ClusterStatusProtos.ClusterStatus.Builder builder
= ClusterStatusProtos.ClusterStatus.newBuilder()
.addAllBackupMasters(metrics.getBackupMasterNames().stream()
.map(ProtobufUtil::toServerName).collect(Collectors.toList()))
.addAllDeadServers(metrics.getDeadServerNames().stream()
.map(ProtobufUtil::toServerName).collect(Collectors.toList()))
ClusterStatusProtos.ClusterStatus.Builder builder =
ClusterStatusProtos.ClusterStatus.newBuilder()
.addAllBackupMasters(metrics.getBackupMasterNames().stream().map(ProtobufUtil::toServerName)
.collect(Collectors.toList()))
.addAllDeadServers(metrics.getDeadServerNames().stream().map(ProtobufUtil::toServerName)
.collect(Collectors.toList()))
.addAllLiveServers(metrics.getLiveServerMetrics().entrySet().stream()
.map(s -> ClusterStatusProtos.LiveServerInfo
.newBuilder()
.setServer(ProtobufUtil.toServerName(s.getKey()))
.setServerLoad(ServerMetricsBuilder.toServerLoad(s.getValue()))
.build())
.collect(Collectors.toList()))
.map(s -> ClusterStatusProtos.LiveServerInfo.newBuilder()
.setServer(ProtobufUtil.toServerName(s.getKey()))
.setServerLoad(ServerMetricsBuilder.toServerLoad(s.getValue())).build())
.collect(Collectors.toList()))
.addAllMasterCoprocessors(metrics.getMasterCoprocessorNames().stream()
.map(n -> HBaseProtos.Coprocessor.newBuilder().setName(n).build())
.collect(Collectors.toList()))
.map(n -> HBaseProtos.Coprocessor.newBuilder().setName(n).build())
.collect(Collectors.toList()))
.addAllRegionsInTransition(metrics.getRegionStatesInTransition().stream()
.map(r -> ClusterStatusProtos.RegionInTransition
.newBuilder()
.setSpec(HBaseProtos.RegionSpecifier
.newBuilder()
.setType(HBaseProtos.RegionSpecifier.RegionSpecifierType.REGION_NAME)
.setValue(UnsafeByteOperations.unsafeWrap(r.getRegion().getRegionName()))
.build())
.setRegionState(r.convert())
.build())
.collect(Collectors.toList()))
.map(r -> ClusterStatusProtos.RegionInTransition.newBuilder()
.setSpec(HBaseProtos.RegionSpecifier.newBuilder()
.setType(HBaseProtos.RegionSpecifier.RegionSpecifierType.REGION_NAME)
.setValue(UnsafeByteOperations.unsafeWrap(r.getRegion().getRegionName())).build())
.setRegionState(r.convert()).build())
.collect(Collectors.toList()))
.setMasterInfoPort(metrics.getMasterInfoPort())
.addAllServersName(metrics.getServersName().stream().map(ProtobufUtil::toServerName)
.collect(Collectors.toList()))
.addAllTableRegionStatesCount(metrics.getTableRegionStatesCount().entrySet().stream()
.map(status ->
ClusterStatusProtos.TableRegionStatesCount.newBuilder()
.setTableName(ProtobufUtil.toProtoTableName((status.getKey())))
.setRegionStatesCount(ProtobufUtil.toTableRegionStatesCount(status.getValue()))
.build())
.map(status -> ClusterStatusProtos.TableRegionStatesCount.newBuilder()
.setTableName(ProtobufUtil.toProtoTableName((status.getKey())))
.setRegionStatesCount(ProtobufUtil.toTableRegionStatesCount(status.getValue())).build())
.collect(Collectors.toList()));
if (metrics.getMasterName() != null) {
builder.setMaster(ProtobufUtil.toServerName((metrics.getMasterName())));
@ -95,40 +85,33 @@ public final class ClusterMetricsBuilder {
}
if (metrics.getHBaseVersion() != null) {
builder.setHbaseVersion(
FSProtos.HBaseVersionFileContent.newBuilder()
.setVersion(metrics.getHBaseVersion()));
FSProtos.HBaseVersionFileContent.newBuilder().setVersion(metrics.getHBaseVersion()));
}
return builder.build();
}
public static ClusterMetrics toClusterMetrics(
ClusterStatusProtos.ClusterStatus proto) {
public static ClusterMetrics toClusterMetrics(ClusterStatusProtos.ClusterStatus proto) {
ClusterMetricsBuilder builder = ClusterMetricsBuilder.newBuilder();
builder.setLiveServerMetrics(proto.getLiveServersList().stream()
builder
.setLiveServerMetrics(proto.getLiveServersList().stream()
.collect(Collectors.toMap(e -> ProtobufUtil.toServerName(e.getServer()),
ServerMetricsBuilder::toServerMetrics)))
.setDeadServerNames(proto.getDeadServersList().stream()
.map(ProtobufUtil::toServerName)
.collect(Collectors.toList()))
.setBackerMasterNames(proto.getBackupMastersList().stream()
.map(ProtobufUtil::toServerName)
.collect(Collectors.toList()))
.setRegionsInTransition(proto.getRegionsInTransitionList().stream()
.map(ClusterStatusProtos.RegionInTransition::getRegionState)
.map(RegionState::convert)
.collect(Collectors.toList()))
.setMasterCoprocessorNames(proto.getMasterCoprocessorsList().stream()
.map(HBaseProtos.Coprocessor::getName)
.collect(Collectors.toList()))
.setServerNames(proto.getServersNameList().stream().map(ProtobufUtil::toServerName)
.collect(Collectors.toList()))
.setTableRegionStatesCount(
proto.getTableRegionStatesCountList().stream()
.collect(Collectors.toMap(
e -> ProtobufUtil.toTableName(e.getTableName()),
e -> ProtobufUtil.toTableRegionStatesCount(e.getRegionStatesCount()))))
.setMasterTasks(proto.getMasterTasksList().stream()
.map(t -> ProtobufUtil.getServerTask(t)).collect(Collectors.toList()));
ServerMetricsBuilder::toServerMetrics)))
.setDeadServerNames(proto.getDeadServersList().stream().map(ProtobufUtil::toServerName)
.collect(Collectors.toList()))
.setBackerMasterNames(proto.getBackupMastersList().stream().map(ProtobufUtil::toServerName)
.collect(Collectors.toList()))
.setRegionsInTransition(proto.getRegionsInTransitionList().stream()
.map(ClusterStatusProtos.RegionInTransition::getRegionState).map(RegionState::convert)
.collect(Collectors.toList()))
.setMasterCoprocessorNames(proto.getMasterCoprocessorsList().stream()
.map(HBaseProtos.Coprocessor::getName).collect(Collectors.toList()))
.setServerNames(proto.getServersNameList().stream().map(ProtobufUtil::toServerName)
.collect(Collectors.toList()))
.setTableRegionStatesCount(proto.getTableRegionStatesCountList().stream()
.collect(Collectors.toMap(e -> ProtobufUtil.toTableName(e.getTableName()),
e -> ProtobufUtil.toTableRegionStatesCount(e.getRegionStatesCount()))))
.setMasterTasks(proto.getMasterTasksList().stream().map(t -> ProtobufUtil.getServerTask(t))
.collect(Collectors.toList()));
if (proto.hasClusterId()) {
builder.setClusterId(ClusterId.convert(proto.getClusterId()).toString());
}
@ -158,21 +141,35 @@ public final class ClusterMetricsBuilder {
*/
public static ClusterMetrics.Option toOption(ClusterStatusProtos.Option option) {
switch (option) {
case HBASE_VERSION: return ClusterMetrics.Option.HBASE_VERSION;
case LIVE_SERVERS: return ClusterMetrics.Option.LIVE_SERVERS;
case DEAD_SERVERS: return ClusterMetrics.Option.DEAD_SERVERS;
case REGIONS_IN_TRANSITION: return ClusterMetrics.Option.REGIONS_IN_TRANSITION;
case CLUSTER_ID: return ClusterMetrics.Option.CLUSTER_ID;
case MASTER_COPROCESSORS: return ClusterMetrics.Option.MASTER_COPROCESSORS;
case MASTER: return ClusterMetrics.Option.MASTER;
case BACKUP_MASTERS: return ClusterMetrics.Option.BACKUP_MASTERS;
case BALANCER_ON: return ClusterMetrics.Option.BALANCER_ON;
case SERVERS_NAME: return ClusterMetrics.Option.SERVERS_NAME;
case MASTER_INFO_PORT: return ClusterMetrics.Option.MASTER_INFO_PORT;
case TABLE_TO_REGIONS_COUNT: return ClusterMetrics.Option.TABLE_TO_REGIONS_COUNT;
case TASKS: return ClusterMetrics.Option.TASKS;
case HBASE_VERSION:
return ClusterMetrics.Option.HBASE_VERSION;
case LIVE_SERVERS:
return ClusterMetrics.Option.LIVE_SERVERS;
case DEAD_SERVERS:
return ClusterMetrics.Option.DEAD_SERVERS;
case REGIONS_IN_TRANSITION:
return ClusterMetrics.Option.REGIONS_IN_TRANSITION;
case CLUSTER_ID:
return ClusterMetrics.Option.CLUSTER_ID;
case MASTER_COPROCESSORS:
return ClusterMetrics.Option.MASTER_COPROCESSORS;
case MASTER:
return ClusterMetrics.Option.MASTER;
case BACKUP_MASTERS:
return ClusterMetrics.Option.BACKUP_MASTERS;
case BALANCER_ON:
return ClusterMetrics.Option.BALANCER_ON;
case SERVERS_NAME:
return ClusterMetrics.Option.SERVERS_NAME;
case MASTER_INFO_PORT:
return ClusterMetrics.Option.MASTER_INFO_PORT;
case TABLE_TO_REGIONS_COUNT:
return ClusterMetrics.Option.TABLE_TO_REGIONS_COUNT;
case TASKS:
return ClusterMetrics.Option.TASKS;
// should not reach here
default: throw new IllegalArgumentException("Invalid option: " + option);
default:
throw new IllegalArgumentException("Invalid option: " + option);
}
}
@ -183,21 +180,35 @@ public final class ClusterMetricsBuilder {
*/
public static ClusterStatusProtos.Option toOption(ClusterMetrics.Option option) {
switch (option) {
case HBASE_VERSION: return ClusterStatusProtos.Option.HBASE_VERSION;
case LIVE_SERVERS: return ClusterStatusProtos.Option.LIVE_SERVERS;
case DEAD_SERVERS: return ClusterStatusProtos.Option.DEAD_SERVERS;
case REGIONS_IN_TRANSITION: return ClusterStatusProtos.Option.REGIONS_IN_TRANSITION;
case CLUSTER_ID: return ClusterStatusProtos.Option.CLUSTER_ID;
case MASTER_COPROCESSORS: return ClusterStatusProtos.Option.MASTER_COPROCESSORS;
case MASTER: return ClusterStatusProtos.Option.MASTER;
case BACKUP_MASTERS: return ClusterStatusProtos.Option.BACKUP_MASTERS;
case BALANCER_ON: return ClusterStatusProtos.Option.BALANCER_ON;
case SERVERS_NAME: return Option.SERVERS_NAME;
case MASTER_INFO_PORT: return ClusterStatusProtos.Option.MASTER_INFO_PORT;
case TABLE_TO_REGIONS_COUNT: return ClusterStatusProtos.Option.TABLE_TO_REGIONS_COUNT;
case TASKS: return ClusterStatusProtos.Option.TASKS;
case HBASE_VERSION:
return ClusterStatusProtos.Option.HBASE_VERSION;
case LIVE_SERVERS:
return ClusterStatusProtos.Option.LIVE_SERVERS;
case DEAD_SERVERS:
return ClusterStatusProtos.Option.DEAD_SERVERS;
case REGIONS_IN_TRANSITION:
return ClusterStatusProtos.Option.REGIONS_IN_TRANSITION;
case CLUSTER_ID:
return ClusterStatusProtos.Option.CLUSTER_ID;
case MASTER_COPROCESSORS:
return ClusterStatusProtos.Option.MASTER_COPROCESSORS;
case MASTER:
return ClusterStatusProtos.Option.MASTER;
case BACKUP_MASTERS:
return ClusterStatusProtos.Option.BACKUP_MASTERS;
case BALANCER_ON:
return ClusterStatusProtos.Option.BALANCER_ON;
case SERVERS_NAME:
return Option.SERVERS_NAME;
case MASTER_INFO_PORT:
return ClusterStatusProtos.Option.MASTER_INFO_PORT;
case TABLE_TO_REGIONS_COUNT:
return ClusterStatusProtos.Option.TABLE_TO_REGIONS_COUNT;
case TASKS:
return ClusterStatusProtos.Option.TASKS;
// should not reach here
default: throw new IllegalArgumentException("Invalid option: " + option);
default:
throw new IllegalArgumentException("Invalid option: " + option);
}
}
@ -208,7 +219,7 @@ public final class ClusterMetricsBuilder {
*/
public static EnumSet<ClusterMetrics.Option> toOptions(List<ClusterStatusProtos.Option> options) {
return options.stream().map(ClusterMetricsBuilder::toOption)
.collect(Collectors.toCollection(() -> EnumSet.noneOf(ClusterMetrics.Option.class)));
.collect(Collectors.toCollection(() -> EnumSet.noneOf(ClusterMetrics.Option.class)));
}
/**
@ -223,6 +234,7 @@ public final class ClusterMetricsBuilder {
public static ClusterMetricsBuilder newBuilder() {
return new ClusterMetricsBuilder();
}
@Nullable
private String hbaseVersion;
private List<ServerName> deadServerNames = Collections.emptyList();
@ -244,10 +256,12 @@ public final class ClusterMetricsBuilder {
private ClusterMetricsBuilder() {
}
public ClusterMetricsBuilder setHBaseVersion(String value) {
this.hbaseVersion = value;
return this;
}
public ClusterMetricsBuilder setDeadServerNames(List<ServerName> value) {
this.deadServerNames = value;
return this;
@ -262,62 +276,59 @@ public final class ClusterMetricsBuilder {
this.masterName = value;
return this;
}
public ClusterMetricsBuilder setBackerMasterNames(List<ServerName> value) {
this.backupMasterNames = value;
return this;
}
public ClusterMetricsBuilder setRegionsInTransition(List<RegionState> value) {
this.regionsInTransition = value;
return this;
}
public ClusterMetricsBuilder setClusterId(String value) {
this.clusterId = value;
return this;
}
public ClusterMetricsBuilder setMasterCoprocessorNames(List<String> value) {
this.masterCoprocessorNames = value;
return this;
}
public ClusterMetricsBuilder setBalancerOn(@Nullable Boolean value) {
this.balancerOn = value;
return this;
}
public ClusterMetricsBuilder setMasterInfoPort(int value) {
this.masterInfoPort = value;
return this;
}
public ClusterMetricsBuilder setServerNames(List<ServerName> serversName) {
this.serversName = serversName;
return this;
}
public ClusterMetricsBuilder setMasterTasks(List<ServerTask> masterTasks) {
this.masterTasks = masterTasks;
return this;
}
public ClusterMetricsBuilder setTableRegionStatesCount(
Map<TableName, RegionStatesCount> tableRegionStatesCount) {
public ClusterMetricsBuilder
setTableRegionStatesCount(Map<TableName, RegionStatesCount> tableRegionStatesCount) {
this.tableRegionStatesCount = tableRegionStatesCount;
return this;
}
public ClusterMetrics build() {
return new ClusterMetricsImpl(
hbaseVersion,
deadServerNames,
liveServerMetrics,
masterName,
backupMasterNames,
regionsInTransition,
clusterId,
masterCoprocessorNames,
balancerOn,
masterInfoPort,
serversName,
tableRegionStatesCount,
masterTasks
);
return new ClusterMetricsImpl(hbaseVersion, deadServerNames, liveServerMetrics, masterName,
backupMasterNames, regionsInTransition, clusterId, masterCoprocessorNames, balancerOn,
masterInfoPort, serversName, tableRegionStatesCount, masterTasks);
}
private static class ClusterMetricsImpl implements ClusterMetrics {
@Nullable
private final String hbaseVersion;
@ -338,17 +349,11 @@ public final class ClusterMetricsBuilder {
private final List<ServerTask> masterTasks;
ClusterMetricsImpl(String hbaseVersion, List<ServerName> deadServerNames,
Map<ServerName, ServerMetrics> liveServerMetrics,
ServerName masterName,
List<ServerName> backupMasterNames,
List<RegionState> regionsInTransition,
String clusterId,
List<String> masterCoprocessorNames,
Boolean balancerOn,
int masterInfoPort,
List<ServerName> serversName,
Map<TableName, RegionStatesCount> tableRegionStatesCount,
List<ServerTask> masterTasks) {
Map<ServerName, ServerMetrics> liveServerMetrics, ServerName masterName,
List<ServerName> backupMasterNames, List<RegionState> regionsInTransition, String clusterId,
List<String> masterCoprocessorNames, Boolean balancerOn, int masterInfoPort,
List<ServerName> serversName, Map<TableName, RegionStatesCount> tableRegionStatesCount,
List<ServerTask> masterTasks) {
this.hbaseVersion = hbaseVersion;
this.deadServerNames = Preconditions.checkNotNull(deadServerNames);
this.liveServerMetrics = Preconditions.checkNotNull(liveServerMetrics);
@ -437,15 +442,15 @@ public final class ClusterMetricsBuilder {
int backupMastersSize = getBackupMasterNames().size();
sb.append("\nNumber of backup masters: " + backupMastersSize);
if (backupMastersSize > 0) {
for (ServerName serverName: getBackupMasterNames()) {
for (ServerName serverName : getBackupMasterNames()) {
sb.append("\n " + serverName);
}
}
int serversSize = getLiveServerMetrics().size();
int serversNameSize = getServersName().size();
sb.append("\nNumber of live region servers: "
+ (serversSize > 0 ? serversSize : serversNameSize));
sb.append(
"\nNumber of live region servers: " + (serversSize > 0 ? serversSize : serversNameSize));
if (serversSize > 0) {
for (ServerName serverName : getLiveServerMetrics().keySet()) {
sb.append("\n " + serverName.getServerName());


@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -16,7 +15,6 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase;
import edu.umd.cs.findbugs.annotations.Nullable;
@ -26,7 +24,6 @@ import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import org.apache.hadoop.hbase.client.RegionStatesCount;
import org.apache.hadoop.hbase.master.RegionState;
import org.apache.yetus.audience.InterfaceAudience;
@ -45,32 +42,37 @@ import org.apache.hbase.thirdparty.com.google.common.base.Objects;
* <li>The average cluster load.</li>
* <li>The number of regions deployed on the cluster.</li>
* <li>The number of requests since last report.</li>
* <li>Detailed region server loading and resource usage information,
* per server and per region.</li>
* <li>Detailed region server loading and resource usage information, per server and per
* region.</li>
* <li>Regions in transition at master</li>
* <li>The unique cluster ID</li>
* </ul>
* <tt>{@link ClusterMetrics.Option}</tt> provides a way to get desired ClusterStatus information.
 * The following code will get all the cluster information.
*
* <pre>
* {@code
* // Original version still works
* Admin admin = connection.getAdmin();
* ClusterStatus status = admin.getClusterStatus();
* // or below, a new version which has the same effects
* ClusterStatus status = admin.getClusterStatus(EnumSet.allOf(Option.class));
* {
* &#64;code
* // Original version still works
* Admin admin = connection.getAdmin();
* ClusterStatus status = admin.getClusterStatus();
* // or below, a new version which has the same effects
* ClusterStatus status = admin.getClusterStatus(EnumSet.allOf(Option.class));
* }
* </pre>
* If information about live servers is the only wanted.
* then codes in the following way:
*
 * If information about live servers is all that is wanted, then code it in the following way:
*
* <pre>
* {@code
* Admin admin = connection.getAdmin();
* ClusterStatus status = admin.getClusterStatus(EnumSet.of(Option.LIVE_SERVERS));
* {
* &#64;code
* Admin admin = connection.getAdmin();
* ClusterStatus status = admin.getClusterStatus(EnumSet.of(Option.LIVE_SERVERS));
* }
* </pre>
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* Use {@link ClusterMetrics} instead.
*
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use {@link ClusterMetrics}
* instead.
*/
@InterfaceAudience.Public
@Deprecated
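A minimal sketch of the Option-based usage the javadoc above describes, using the non-deprecated ClusterMetrics path; it assumes an already-open Connection named connection (not shown here) and requests only live-server data.

```java
import java.io.IOException;
import java.util.EnumSet;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.ClusterMetrics.Option;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class LiveServersExample {
  // Prints each live region server and how many regions it currently carries.
  static void printLiveServers(Connection connection) throws IOException {
    try (Admin admin = connection.getAdmin()) {
      // Ask the Master only for the LIVE_SERVERS portion of the cluster status.
      ClusterMetrics metrics = admin.getClusterMetrics(EnumSet.of(Option.LIVE_SERVERS));
      metrics.getLiveServerMetrics().forEach((serverName, serverMetrics) -> System.out
        .println(serverName + " -> " + serverMetrics.getRegionMetrics().size() + " regions"));
    }
  }
}
```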
@ -86,26 +88,18 @@ public class ClusterStatus implements ClusterMetrics {
*/
@Deprecated
public ClusterStatus(final String hbaseVersion, final String clusterid,
final Map<ServerName, ServerLoad> servers,
final Collection<ServerName> deadServers,
final ServerName master,
final Collection<ServerName> backupMasters,
final List<RegionState> rit,
final String[] masterCoprocessors,
final Boolean balancerOn,
final int masterInfoPort) {
final Map<ServerName, ServerLoad> servers, final Collection<ServerName> deadServers,
final ServerName master, final Collection<ServerName> backupMasters,
final List<RegionState> rit, final String[] masterCoprocessors, final Boolean balancerOn,
final int masterInfoPort) {
// TODO: make this constructor private
this(ClusterMetricsBuilder.newBuilder().setHBaseVersion(hbaseVersion)
.setDeadServerNames(new ArrayList<>(deadServers))
.setLiveServerMetrics(servers.entrySet().stream()
.collect(Collectors.toMap(e -> e.getKey(), e -> e.getValue())))
.setLiveServerMetrics(
servers.entrySet().stream().collect(Collectors.toMap(e -> e.getKey(), e -> e.getValue())))
.setBackerMasterNames(new ArrayList<>(backupMasters)).setBalancerOn(balancerOn)
.setClusterId(clusterid)
.setMasterCoprocessorNames(Arrays.asList(masterCoprocessors))
.setMasterName(master)
.setMasterInfoPort(masterInfoPort)
.setRegionsInTransition(rit)
.build());
.setClusterId(clusterid).setMasterCoprocessorNames(Arrays.asList(masterCoprocessors))
.setMasterName(master).setMasterInfoPort(masterInfoPort).setRegionsInTransition(rit).build());
}
@InterfaceAudience.Private
@ -127,10 +121,10 @@ public class ClusterStatus implements ClusterMetrics {
}
/**
* @return the number of region servers in the cluster
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* Use {@link #getLiveServerMetrics()}.
*/
* @return the number of region servers in the cluster
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use
* {@link #getLiveServerMetrics()}.
*/
@Deprecated
public int getServersSize() {
return metrics.getLiveServerMetrics().size();
@ -139,8 +133,8 @@ public class ClusterStatus implements ClusterMetrics {
/**
* @return the number of dead region servers in the cluster
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* (<a href="https://issues.apache.org/jira/browse/HBASE-13656">HBASE-13656</a>).
* Use {@link #getDeadServerNames()}.
* (<a href="https://issues.apache.org/jira/browse/HBASE-13656">HBASE-13656</a>). Use
* {@link #getDeadServerNames()}.
*/
@Deprecated
public int getDeadServers() {
@ -149,8 +143,8 @@ public class ClusterStatus implements ClusterMetrics {
/**
* @return the number of dead region servers in the cluster
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* Use {@link #getDeadServerNames()}.
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use
* {@link #getDeadServerNames()}.
*/
@Deprecated
public int getDeadServersSize() {
@ -159,8 +153,8 @@ public class ClusterStatus implements ClusterMetrics {
/**
* @return the number of regions deployed on the cluster
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* Use {@link #getRegionCount()}.
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use
* {@link #getRegionCount()}.
*/
@Deprecated
public int getRegionsCount() {
@ -169,8 +163,8 @@ public class ClusterStatus implements ClusterMetrics {
/**
* @return the number of requests since last report
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* Use {@link #getRequestCount()} instead.
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use
* {@link #getRequestCount()} instead.
*/
@Deprecated
public int getRequestsCount() {
@ -214,14 +208,14 @@ public class ClusterStatus implements ClusterMetrics {
return false;
}
ClusterStatus other = (ClusterStatus) o;
return Objects.equal(getHBaseVersion(), other.getHBaseVersion()) &&
Objects.equal(getLiveServerLoads(), other.getLiveServerLoads()) &&
getDeadServerNames().containsAll(other.getDeadServerNames()) &&
Arrays.equals(getMasterCoprocessors(), other.getMasterCoprocessors()) &&
Objects.equal(getMaster(), other.getMaster()) &&
getBackupMasters().containsAll(other.getBackupMasters()) &&
Objects.equal(getClusterId(), other.getClusterId()) &&
getMasterInfoPort() == other.getMasterInfoPort();
return Objects.equal(getHBaseVersion(), other.getHBaseVersion())
&& Objects.equal(getLiveServerLoads(), other.getLiveServerLoads())
&& getDeadServerNames().containsAll(other.getDeadServerNames())
&& Arrays.equals(getMasterCoprocessors(), other.getMasterCoprocessors())
&& Objects.equal(getMaster(), other.getMaster())
&& getBackupMasters().containsAll(other.getBackupMasters())
&& Objects.equal(getClusterId(), other.getClusterId())
&& getMasterInfoPort() == other.getMasterInfoPort();
}
@Override
@ -239,8 +233,8 @@ public class ClusterStatus implements ClusterMetrics {
}
/**
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* Use {@link #getLiveServerMetrics()} instead.
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use
* {@link #getLiveServerMetrics()} instead.
*/
@Deprecated
public Collection<ServerName> getServers() {
@ -250,8 +244,8 @@ public class ClusterStatus implements ClusterMetrics {
/**
* Returns detailed information about the current master {@link ServerName}.
* @return current master information if it exists
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* Use {@link #getMasterName} instead.
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use {@link #getMasterName}
* instead.
*/
@Deprecated
public ServerName getMaster() {
@ -260,8 +254,8 @@ public class ClusterStatus implements ClusterMetrics {
/**
* @return the number of backup masters in the cluster
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* Use {@link #getBackupMasterNames} instead.
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use
* {@link #getBackupMasterNames} instead.
*/
@Deprecated
public int getBackupMastersSize() {
@ -270,8 +264,8 @@ public class ClusterStatus implements ClusterMetrics {
/**
* @return the names of backup masters
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* Use {@link #getBackupMasterNames} instead.
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use
* {@link #getBackupMasterNames} instead.
*/
@Deprecated
public List<ServerName> getBackupMasters() {
@ -279,10 +273,9 @@ public class ClusterStatus implements ClusterMetrics {
}
/**
* @param sn
* @return Server's load or null if not found.
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* Use {@link #getLiveServerMetrics} instead.
   * @param sn the server name
   * @return Server's load or null if not found.
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use
* {@link #getLiveServerMetrics} instead.
*/
@Deprecated
public ServerLoad getLoad(final ServerName sn) {
@ -300,8 +293,8 @@ public class ClusterStatus implements ClusterMetrics {
}
/**
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* Use {@link #getMasterCoprocessorNames} instead.
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use
* {@link #getMasterCoprocessorNames} instead.
*/
@Deprecated
public String[] getMasterCoprocessors() {
@ -310,8 +303,8 @@ public class ClusterStatus implements ClusterMetrics {
}
/**
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* Use {@link #getLastMajorCompactionTimestamp(TableName)} instead.
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use
* {@link #getLastMajorCompactionTimestamp(TableName)} instead.
*/
@Deprecated
public long getLastMajorCompactionTsForTable(TableName table) {
@ -319,8 +312,8 @@ public class ClusterStatus implements ClusterMetrics {
}
/**
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* Use {@link #getLastMajorCompactionTimestamp(byte[])} instead.
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use
* {@link #getLastMajorCompactionTimestamp(byte[])} instead.
*/
@Deprecated
public long getLastMajorCompactionTsForRegion(final byte[] region) {
@ -328,8 +321,7 @@ public class ClusterStatus implements ClusterMetrics {
}
/**
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* No flag in 2.0
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 No flag in 2.0
*/
@Deprecated
public boolean isBalancerOn() {
@ -369,15 +361,15 @@ public class ClusterStatus implements ClusterMetrics {
int backupMastersSize = getBackupMastersSize();
sb.append("\nNumber of backup masters: " + backupMastersSize);
if (backupMastersSize > 0) {
for (ServerName serverName: metrics.getBackupMasterNames()) {
for (ServerName serverName : metrics.getBackupMasterNames()) {
sb.append("\n " + serverName);
}
}
int serversSize = getServersSize();
int serversNameSize = getServersName().size();
sb.append("\nNumber of live region servers: "
+ (serversSize > 0 ? serversSize : serversNameSize));
sb.append(
"\nNumber of live region servers: " + (serversSize > 0 ? serversSize : serversNameSize));
if (serversSize > 0) {
for (ServerName serverName : metrics.getLiveServerMetrics().keySet()) {
sb.append("\n " + serverName.getServerName());
@ -403,7 +395,7 @@ public class ClusterStatus implements ClusterMetrics {
int ritSize = metrics.getRegionStatesInTransition().size();
sb.append("\nNumber of regions in transition: " + ritSize);
if (ritSize > 0) {
for (RegionState state: metrics.getRegionStatesInTransition()) {
for (RegionState state : metrics.getRegionStatesInTransition()) {
sb.append("\n " + state.toDescriptiveString());
}
}


@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information


@ -7,33 +7,28 @@
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase;
import com.google.protobuf.Service;
import java.io.IOException;
import java.util.Collections;
import com.google.protobuf.Service;
import org.apache.yetus.audience.InterfaceAudience;
import org.apache.yetus.audience.InterfaceStability;
/**
* Base interface for the 4 coprocessors - MasterCoprocessor, RegionCoprocessor,
* RegionServerCoprocessor, and WALCoprocessor.
* Do NOT implement this interface directly. Unless an implementation implements one (or more) of
* the above mentioned 4 coprocessors, it'll fail to be loaded by any coprocessor host.
* RegionServerCoprocessor, and WALCoprocessor. Do NOT implement this interface directly. Unless an
* implementation implements one (or more) of the above mentioned 4 coprocessors, it'll fail to be
* loaded by any coprocessor host. Example: Building a coprocessor to observe Master operations.
*
* Example:
* Building a coprocessor to observe Master operations.
* <pre>
* class MyMasterCoprocessor implements MasterCoprocessor {
* &#64;Override
@ -48,6 +43,7 @@ import org.apache.yetus.audience.InterfaceStability;
* </pre>
*
* Building a Service which can be loaded by both Master and RegionServer
*
* <pre>
* class MyCoprocessorService implements MasterCoprocessor, RegionServerCoprocessor {
* &#64;Override
@ -87,18 +83,19 @@ public interface Coprocessor {
   * Called by the {@link CoprocessorEnvironment} during its own startup to initialize the
* coprocessor.
*/
default void start(CoprocessorEnvironment env) throws IOException {}
default void start(CoprocessorEnvironment env) throws IOException {
}
/**
* Called by the {@link CoprocessorEnvironment} during it's own shutdown to stop the
* coprocessor.
   * Called by the {@link CoprocessorEnvironment} during its own shutdown to stop the coprocessor.
*/
default void stop(CoprocessorEnvironment env) throws IOException {}
default void stop(CoprocessorEnvironment env) throws IOException {
}
/**
* Coprocessor endpoints providing protobuf services should override this method.
* @return Iterable of {@link Service}s or empty collection. Implementations should never
* return null.
* @return Iterable of {@link Service}s or empty collection. Implementations should never return
* null.
*/
default Iterable<Service> getServices() {
return Collections.EMPTY_SET;
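As a concrete companion to the MasterCoprocessor pattern sketched in the javadoc above, here is a minimal, hedged example of a Master-side coprocessor; the class name is illustrative and the observer body is intentionally empty.

```java
import java.util.Optional;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessor;
import org.apache.hadoop.hbase.coprocessor.MasterObserver;

public class ExampleMasterCoprocessor implements MasterCoprocessor {
  @Override
  public Optional<MasterObserver> getMasterObserver() {
    // MasterObserver's methods are all default no-ops; override only the hooks you need
    // (for example preCreateTable) to observe Master operations.
    return Optional.of(new MasterObserver() {
    });
  }
}
```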


@ -7,16 +7,14 @@
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase;
import org.apache.hadoop.conf.Configuration;
@ -46,8 +44,8 @@ public interface CoprocessorEnvironment<C extends Coprocessor> {
int getLoadSequence();
/**
* @return a Read-only Configuration; throws {@link UnsupportedOperationException} if you try
* to set a configuration.
* @return a Read-only Configuration; throws {@link UnsupportedOperationException} if you try to
* set a configuration.
*/
Configuration getConfiguration();
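A hedged sketch of how a coprocessor typically consumes the read-only Configuration described above during start(); the property name is purely illustrative.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;

public class ConfigReadingCoprocessor implements RegionCoprocessor {
  private long flushIntervalMs;

  @Override
  public void start(CoprocessorEnvironment env) throws IOException {
    // Reading is fine; calling env.getConfiguration().set(...) would throw
    // UnsupportedOperationException because the returned view is read-only.
    // "example.coprocessor.flush.interval.ms" is a hypothetical property name.
    flushIntervalMs = env.getConfiguration().getLong("example.coprocessor.flush.interval.ms", 1000L);
  }
}
```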


@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -41,7 +40,7 @@ public class DoNotRetryIOException extends HBaseIOException {
}
/**
* @param message the message for this exception
* @param message the message for this exception
* @param throwable the {@link Throwable} to use for this exception
*/
public DoNotRetryIOException(String message, Throwable throwable) {


@ -7,24 +7,22 @@
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase;
import java.io.IOException;
import org.apache.yetus.audience.InterfaceAudience;
/**
* Thrown during flush if the possibility snapshot content was not properly
* persisted into store files. Response should include replay of wal content.
 * Thrown during flush if it is possible that snapshot content was not properly persisted into
 * store files. Response should include replay of wal content.
*/
@InterfaceAudience.Public
public class DroppedSnapshotException extends IOException {
@ -43,9 +41,8 @@ public class DroppedSnapshotException extends IOException {
/**
* DroppedSnapshotException with cause
*
* @param message the message for this exception
* @param cause the cause for this exception
* @param cause the cause for this exception
*/
public DroppedSnapshotException(String message, Throwable cause) {
super(message, cause);


@ -7,35 +7,31 @@
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase;
import java.io.IOException;
import org.apache.yetus.audience.InterfaceAudience;
/**
 * Thrown when cleanup of an unsuccessfully initialized WAL fails
*/
@InterfaceAudience.Public
public class FailedCloseWALAfterInitializedErrorException
extends IOException {
public class FailedCloseWALAfterInitializedErrorException extends IOException {
private static final long serialVersionUID = -5463156587431677322L;
/**
* constructor with error msg and throwable
* @param msg message
* @param t throwable
* @param t throwable
*/
public FailedCloseWALAfterInitializedErrorException(String msg, Throwable t) {
super(msg, t);
@ -55,4 +51,4 @@ public class FailedCloseWALAfterInitializedErrorException
public FailedCloseWALAfterInitializedErrorException() {
super();
}
}
}


@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -20,8 +20,8 @@ package org.apache.hadoop.hbase;
import org.apache.yetus.audience.InterfaceAudience;
/**
* Base class for exceptions thrown by an HBase server. May contain extra info about
* the state of the server when the exception was thrown.
* Base class for exceptions thrown by an HBase server. May contain extra info about the state of
* the server when the exception was thrown.
*/
@InterfaceAudience.Public
public class HBaseServerException extends HBaseIOException {


@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -19,8 +18,6 @@
package org.apache.hadoop.hbase;
import java.util.Map;
import org.apache.yetus.audience.InterfaceAudience;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor;
@ -32,30 +29,39 @@ import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.PrettyPrinter.Unit;
import org.apache.yetus.audience.InterfaceAudience;
/**
* An HColumnDescriptor contains information about a column family such as the
* number of versions, compression settings, etc.
*
* It is used as input when creating a table or adding a column.
* An HColumnDescriptor contains information about a column family such as the number of versions,
* compression settings, etc. It is used as input when creating a table or adding a column.
*/
@InterfaceAudience.Public
@Deprecated // remove it in 3.0
public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HColumnDescriptor> {
public static final String IN_MEMORY_COMPACTION = ColumnFamilyDescriptorBuilder.IN_MEMORY_COMPACTION;
public static final String IN_MEMORY_COMPACTION =
ColumnFamilyDescriptorBuilder.IN_MEMORY_COMPACTION;
public static final String COMPRESSION = ColumnFamilyDescriptorBuilder.COMPRESSION;
public static final String COMPRESSION_COMPACT = ColumnFamilyDescriptorBuilder.COMPRESSION_COMPACT;
public static final String COMPRESSION_COMPACT_MAJOR = ColumnFamilyDescriptorBuilder.COMPRESSION_COMPACT_MAJOR;
public static final String COMPRESSION_COMPACT_MINOR = ColumnFamilyDescriptorBuilder.COMPRESSION_COMPACT_MINOR;
public static final String COMPRESSION_COMPACT =
ColumnFamilyDescriptorBuilder.COMPRESSION_COMPACT;
public static final String COMPRESSION_COMPACT_MAJOR =
ColumnFamilyDescriptorBuilder.COMPRESSION_COMPACT_MAJOR;
public static final String COMPRESSION_COMPACT_MINOR =
ColumnFamilyDescriptorBuilder.COMPRESSION_COMPACT_MINOR;
public static final String ENCODE_ON_DISK = "ENCODE_ON_DISK";
public static final String DATA_BLOCK_ENCODING = ColumnFamilyDescriptorBuilder.DATA_BLOCK_ENCODING;
public static final String DATA_BLOCK_ENCODING =
ColumnFamilyDescriptorBuilder.DATA_BLOCK_ENCODING;
public static final String BLOCKCACHE = ColumnFamilyDescriptorBuilder.BLOCKCACHE;
public static final String CACHE_DATA_ON_WRITE = ColumnFamilyDescriptorBuilder.CACHE_DATA_ON_WRITE;
public static final String CACHE_INDEX_ON_WRITE = ColumnFamilyDescriptorBuilder.CACHE_INDEX_ON_WRITE;
public static final String CACHE_BLOOMS_ON_WRITE = ColumnFamilyDescriptorBuilder.CACHE_BLOOMS_ON_WRITE;
public static final String EVICT_BLOCKS_ON_CLOSE = ColumnFamilyDescriptorBuilder.EVICT_BLOCKS_ON_CLOSE;
public static final String CACHE_DATA_ON_WRITE =
ColumnFamilyDescriptorBuilder.CACHE_DATA_ON_WRITE;
public static final String CACHE_INDEX_ON_WRITE =
ColumnFamilyDescriptorBuilder.CACHE_INDEX_ON_WRITE;
public static final String CACHE_BLOOMS_ON_WRITE =
ColumnFamilyDescriptorBuilder.CACHE_BLOOMS_ON_WRITE;
public static final String EVICT_BLOCKS_ON_CLOSE =
ColumnFamilyDescriptorBuilder.EVICT_BLOCKS_ON_CLOSE;
public static final String CACHE_DATA_IN_L1 = "CACHE_DATA_IN_L1";
public static final String PREFETCH_BLOCKS_ON_OPEN = ColumnFamilyDescriptorBuilder.PREFETCH_BLOCKS_ON_OPEN;
public static final String PREFETCH_BLOCKS_ON_OPEN =
ColumnFamilyDescriptorBuilder.PREFETCH_BLOCKS_ON_OPEN;
public static final String BLOCKSIZE = ColumnFamilyDescriptorBuilder.BLOCKSIZE;
public static final String LENGTH = "LENGTH";
public static final String TTL = ColumnFamilyDescriptorBuilder.TTL;
@ -72,46 +78,62 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
public static final byte[] IS_MOB_BYTES = Bytes.toBytes(IS_MOB);
public static final String MOB_THRESHOLD = ColumnFamilyDescriptorBuilder.MOB_THRESHOLD;
public static final byte[] MOB_THRESHOLD_BYTES = Bytes.toBytes(MOB_THRESHOLD);
public static final long DEFAULT_MOB_THRESHOLD = ColumnFamilyDescriptorBuilder.DEFAULT_MOB_THRESHOLD;
public static final String MOB_COMPACT_PARTITION_POLICY = ColumnFamilyDescriptorBuilder.MOB_COMPACT_PARTITION_POLICY;
public static final byte[] MOB_COMPACT_PARTITION_POLICY_BYTES = Bytes.toBytes(MOB_COMPACT_PARTITION_POLICY);
public static final MobCompactPartitionPolicy DEFAULT_MOB_COMPACT_PARTITION_POLICY
= ColumnFamilyDescriptorBuilder.DEFAULT_MOB_COMPACT_PARTITION_POLICY;
public static final long DEFAULT_MOB_THRESHOLD =
ColumnFamilyDescriptorBuilder.DEFAULT_MOB_THRESHOLD;
public static final String MOB_COMPACT_PARTITION_POLICY =
ColumnFamilyDescriptorBuilder.MOB_COMPACT_PARTITION_POLICY;
public static final byte[] MOB_COMPACT_PARTITION_POLICY_BYTES =
Bytes.toBytes(MOB_COMPACT_PARTITION_POLICY);
public static final MobCompactPartitionPolicy DEFAULT_MOB_COMPACT_PARTITION_POLICY =
ColumnFamilyDescriptorBuilder.DEFAULT_MOB_COMPACT_PARTITION_POLICY;
public static final String DFS_REPLICATION = ColumnFamilyDescriptorBuilder.DFS_REPLICATION;
public static final short DEFAULT_DFS_REPLICATION = ColumnFamilyDescriptorBuilder.DEFAULT_DFS_REPLICATION;
public static final short DEFAULT_DFS_REPLICATION =
ColumnFamilyDescriptorBuilder.DEFAULT_DFS_REPLICATION;
public static final String STORAGE_POLICY = ColumnFamilyDescriptorBuilder.STORAGE_POLICY;
public static final String DEFAULT_COMPRESSION = ColumnFamilyDescriptorBuilder.DEFAULT_COMPRESSION.name();
public static final String DEFAULT_COMPRESSION =
ColumnFamilyDescriptorBuilder.DEFAULT_COMPRESSION.name();
public static final boolean DEFAULT_ENCODE_ON_DISK = true;
public static final String DEFAULT_DATA_BLOCK_ENCODING = ColumnFamilyDescriptorBuilder.DEFAULT_DATA_BLOCK_ENCODING.name();
public static final String DEFAULT_DATA_BLOCK_ENCODING =
ColumnFamilyDescriptorBuilder.DEFAULT_DATA_BLOCK_ENCODING.name();
public static final int DEFAULT_VERSIONS = ColumnFamilyDescriptorBuilder.DEFAULT_MAX_VERSIONS;
public static final int DEFAULT_MIN_VERSIONS = ColumnFamilyDescriptorBuilder.DEFAULT_MIN_VERSIONS;
public static final boolean DEFAULT_IN_MEMORY = ColumnFamilyDescriptorBuilder.DEFAULT_IN_MEMORY;
public static final KeepDeletedCells DEFAULT_KEEP_DELETED = ColumnFamilyDescriptorBuilder.DEFAULT_KEEP_DELETED;
public static final KeepDeletedCells DEFAULT_KEEP_DELETED =
ColumnFamilyDescriptorBuilder.DEFAULT_KEEP_DELETED;
public static final boolean DEFAULT_BLOCKCACHE = ColumnFamilyDescriptorBuilder.DEFAULT_BLOCKCACHE;
public static final boolean DEFAULT_CACHE_DATA_ON_WRITE = ColumnFamilyDescriptorBuilder.DEFAULT_CACHE_DATA_ON_WRITE;
public static final boolean DEFAULT_CACHE_DATA_ON_WRITE =
ColumnFamilyDescriptorBuilder.DEFAULT_CACHE_DATA_ON_WRITE;
public static final boolean DEFAULT_CACHE_DATA_IN_L1 = false;
public static final boolean DEFAULT_CACHE_INDEX_ON_WRITE = ColumnFamilyDescriptorBuilder.DEFAULT_CACHE_INDEX_ON_WRITE;
public static final boolean DEFAULT_CACHE_INDEX_ON_WRITE =
ColumnFamilyDescriptorBuilder.DEFAULT_CACHE_INDEX_ON_WRITE;
public static final int DEFAULT_BLOCKSIZE = ColumnFamilyDescriptorBuilder.DEFAULT_BLOCKSIZE;
public static final String DEFAULT_BLOOMFILTER = ColumnFamilyDescriptorBuilder.DEFAULT_BLOOMFILTER.name();
public static final boolean DEFAULT_CACHE_BLOOMS_ON_WRITE = ColumnFamilyDescriptorBuilder.DEFAULT_CACHE_BLOOMS_ON_WRITE;
public static final String DEFAULT_BLOOMFILTER =
ColumnFamilyDescriptorBuilder.DEFAULT_BLOOMFILTER.name();
public static final boolean DEFAULT_CACHE_BLOOMS_ON_WRITE =
ColumnFamilyDescriptorBuilder.DEFAULT_CACHE_BLOOMS_ON_WRITE;
public static final int DEFAULT_TTL = ColumnFamilyDescriptorBuilder.DEFAULT_TTL;
public static final int DEFAULT_REPLICATION_SCOPE = ColumnFamilyDescriptorBuilder.DEFAULT_REPLICATION_SCOPE;
public static final boolean DEFAULT_EVICT_BLOCKS_ON_CLOSE = ColumnFamilyDescriptorBuilder.DEFAULT_EVICT_BLOCKS_ON_CLOSE;
public static final boolean DEFAULT_COMPRESS_TAGS = ColumnFamilyDescriptorBuilder.DEFAULT_COMPRESS_TAGS;
public static final boolean DEFAULT_PREFETCH_BLOCKS_ON_OPEN = ColumnFamilyDescriptorBuilder.DEFAULT_PREFETCH_BLOCKS_ON_OPEN;
public static final String NEW_VERSION_BEHAVIOR = ColumnFamilyDescriptorBuilder.NEW_VERSION_BEHAVIOR;
public static final boolean DEFAULT_NEW_VERSION_BEHAVIOR = ColumnFamilyDescriptorBuilder.DEFAULT_NEW_VERSION_BEHAVIOR;
public static final int DEFAULT_REPLICATION_SCOPE =
ColumnFamilyDescriptorBuilder.DEFAULT_REPLICATION_SCOPE;
public static final boolean DEFAULT_EVICT_BLOCKS_ON_CLOSE =
ColumnFamilyDescriptorBuilder.DEFAULT_EVICT_BLOCKS_ON_CLOSE;
public static final boolean DEFAULT_COMPRESS_TAGS =
ColumnFamilyDescriptorBuilder.DEFAULT_COMPRESS_TAGS;
public static final boolean DEFAULT_PREFETCH_BLOCKS_ON_OPEN =
ColumnFamilyDescriptorBuilder.DEFAULT_PREFETCH_BLOCKS_ON_OPEN;
public static final String NEW_VERSION_BEHAVIOR =
ColumnFamilyDescriptorBuilder.NEW_VERSION_BEHAVIOR;
public static final boolean DEFAULT_NEW_VERSION_BEHAVIOR =
ColumnFamilyDescriptorBuilder.DEFAULT_NEW_VERSION_BEHAVIOR;
protected final ModifyableColumnFamilyDescriptor delegatee;
/**
* Construct a column descriptor specifying only the family name
* The other attributes are defaulted.
*
* @param familyName Column family name. Must be 'printable' -- digit or
* letter -- and may not contain a <code>:</code>
   * Construct a column descriptor specifying only the family name. The other attributes are
* defaulted.
* @param familyName Column family name. Must be 'printable' -- digit or letter -- and may not
* contain a <code>:</code>
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* (<a href="https://issues.apache.org/jira/browse/HBASE-18433">HBASE-18433</a>).
* Use {@link ColumnFamilyDescriptorBuilder#of(String)}.
* (<a href="https://issues.apache.org/jira/browse/HBASE-18433">HBASE-18433</a>). Use
* {@link ColumnFamilyDescriptorBuilder#of(String)}.
*/
@Deprecated
public HColumnDescriptor(final String familyName) {
@ -119,29 +141,26 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
}
/**
* Construct a column descriptor specifying only the family name
* The other attributes are defaulted.
*
* @param familyName Column family name. Must be 'printable' -- digit or
* letter -- and may not contain a <code>:</code>
   * Construct a column descriptor specifying only the family name. The other attributes are
* defaulted.
* @param familyName Column family name. Must be 'printable' -- digit or letter -- and may not
* contain a <code>:</code>
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* (<a href="https://issues.apache.org/jira/browse/HBASE-18433">HBASE-18433</a>).
* Use {@link ColumnFamilyDescriptorBuilder#of(byte[])}.
* (<a href="https://issues.apache.org/jira/browse/HBASE-18433">HBASE-18433</a>). Use
* {@link ColumnFamilyDescriptorBuilder#of(byte[])}.
*/
@Deprecated
public HColumnDescriptor(final byte [] familyName) {
public HColumnDescriptor(final byte[] familyName) {
this(new ModifyableColumnFamilyDescriptor(familyName));
}
/**
* Constructor.
* Makes a deep copy of the supplied descriptor.
* Can make a modifiable descriptor from an UnmodifyableHColumnDescriptor.
*
* Constructor. Makes a deep copy of the supplied descriptor. Can make a modifiable descriptor
* from an UnmodifyableHColumnDescriptor.
* @param desc The descriptor.
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* (<a href="https://issues.apache.org/jira/browse/HBASE-18433">HBASE-18433</a>).
* Use {@link ColumnFamilyDescriptorBuilder#copy(ColumnFamilyDescriptor)}.
* (<a href="https://issues.apache.org/jira/browse/HBASE-18433">HBASE-18433</a>). Use
* {@link ColumnFamilyDescriptorBuilder#copy(ColumnFamilyDescriptor)}.
*/
@Deprecated
public HColumnDescriptor(HColumnDescriptor desc) {
@ -149,8 +168,7 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
}
protected HColumnDescriptor(HColumnDescriptor desc, boolean deepClone) {
this(deepClone ? new ModifyableColumnFamilyDescriptor(desc)
: desc.delegatee);
this(deepClone ? new ModifyableColumnFamilyDescriptor(desc) : desc.delegatee);
}
protected HColumnDescriptor(ModifyableColumnFamilyDescriptor delegate) {
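The deprecation notes above direct callers to ColumnFamilyDescriptorBuilder; the following is a minimal sketch of that replacement path, with the family name, table name, and chosen settings being illustrative only.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class BuilderBasedDescriptorExample {
  static TableDescriptor exampleTable() {
    // Replacement for new HColumnDescriptor("cf") followed by setter calls.
    ColumnFamilyDescriptor family = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf"))
      .setMinVersions(1)    // keep at least one version (used when TTL is set)
      .setMaxVersions(5)    // retain up to five versions per cell
      .setTimeToLive(86400) // TTL in seconds
      .build();
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("example_table"))
      .setColumnFamily(family)
      .build();
  }
}
```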
@ -160,17 +178,18 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
/**
* @param b Family name.
* @return <code>b</code>
* @throws IllegalArgumentException If not null and not a legitimate family
* name: i.e. 'printable' and ends in a ':' (Null passes are allowed because
* <code>b</code> can be null when deserializing). Cannot start with a '.'
* either. Also Family can not be an empty value or equal "recovered.edits".
* @throws IllegalArgumentException If not null and not a legitimate family name: i.e. 'printable'
* and ends in a ':' (Null passes are allowed because
* <code>b</code> can be null when deserializing). Cannot start
* with a '.' either. Also Family can not be an empty value or
* equal "recovered.edits".
* @deprecated since 2.0.0 and will be removed in 3.0.0. Use
* {@link ColumnFamilyDescriptorBuilder#isLegalColumnFamilyName(byte[])} instead.
* {@link ColumnFamilyDescriptorBuilder#isLegalColumnFamilyName(byte[])} instead.
* @see ColumnFamilyDescriptorBuilder#isLegalColumnFamilyName(byte[])
* @see <a href="https://issues.apache.org/jira/browse/HBASE-18008">HBASE-18008</a>
*/
@Deprecated
public static byte [] isLegalFamilyName(final byte [] b) {
public static byte[] isLegalFamilyName(final byte[] b) {
return ColumnFamilyDescriptorBuilder.isLegalColumnFamilyName(b);
}
@ -178,7 +197,7 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
* @return Name of this column family
*/
@Override
public byte [] getName() {
public byte[] getName() {
return delegatee.getName();
}
@ -214,7 +233,7 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
}
/**
* @param key The key.
* @param key The key.
* @param value The value.
* @return this (for chained invocation)
*/
@ -226,12 +245,12 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
/**
* @param key Key whose key and value we're to remove from HCD parameters.
*/
public void remove(final byte [] key) {
public void remove(final byte[] key) {
getDelegateeForModification().removeValue(new Bytes(key));
}
/**
* @param key The key.
* @param key The key.
* @param value The value.
* @return this (for chained invocation)
*/
@ -243,8 +262,8 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
/**
* @return compression type being used for the column family
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* (<a href="https://issues.apache.org/jira/browse/HBASE-13655">HBASE-13655</a>).
* Use {@link #getCompressionType()}.
* (<a href="https://issues.apache.org/jira/browse/HBASE-13655">HBASE-13655</a>). Use
* {@link #getCompressionType()}.
*/
@Deprecated
public Compression.Algorithm getCompression() {
@ -252,10 +271,10 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
}
/**
* @return compression type being used for the column family for major compaction
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* (<a href="https://issues.apache.org/jira/browse/HBASE-13655">HBASE-13655</a>).
* Use {@link #getCompactionCompressionType()}.
* @return compression type being used for the column family for major compaction
* @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
* (<a href="https://issues.apache.org/jira/browse/HBASE-13655">HBASE-13655</a>). Use
* {@link #getCompactionCompressionType()}.
*/
@Deprecated
public Compression.Algorithm getCompactionCompression() {
@ -278,7 +297,6 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
/**
* Set minimum and maximum versions to keep
*
* @param minVersions minimal number of versions
* @param maxVersions maximum number of versions
* @return this (for chained invocation)
@ -291,9 +309,9 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
}
if (maxVersions < minVersions) {
throw new IllegalArgumentException("Unable to set MaxVersion to " + maxVersions
+ " and set MinVersion to " + minVersions
+ ", as maximum versions must be >= minimum versions.");
throw new IllegalArgumentException(
"Unable to set MaxVersion to " + maxVersions + " and set MinVersion to " + minVersions
+ ", as maximum versions must be >= minimum versions.");
}
setMinVersions(minVersions);
setMaxVersions(maxVersions);
@ -306,8 +324,7 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
}
/**
* @param value Blocksize to use when writing out storefiles/hfiles on this
* column family.
* @param value Blocksize to use when writing out storefiles/hfiles on this column family.
* @return this (for chained invocation)
*/
public HColumnDescriptor setBlocksize(int value) {
@ -326,10 +343,9 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
}
/**
* Compression types supported in hbase.
* LZO is not bundled as part of the hbase distribution.
* See <a href="http://wiki.apache.org/hadoop/UsingLzoCompression">LZO Compression</a>
* for how to enable it.
* Compression types supported in hbase. LZO is not bundled as part of the hbase distribution. See
* <a href="http://wiki.apache.org/hadoop/UsingLzoCompression">LZO Compression</a> for how to
* enable it.
* @param value Compression type setting.
* @return this (for chained invocation)
*/
@ -355,10 +371,8 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
/**
* Set whether the tags should be compressed along with DataBlockEncoding. When no
* DataBlockEncoding is been used, this is having no effect.
*
* @param value
* @return this (for chained invocation)
   * DataBlockEncoding is in use, this has no effect.
   * @param value whether tags should be compressed
   * @return this (for chained invocation)
*/
public HColumnDescriptor setCompressTags(boolean value) {
getDelegateeForModification().setCompressTags(value);
@ -386,10 +400,9 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
}
/**
* Compression types supported in hbase.
* LZO is not bundled as part of the hbase distribution.
* See <a href="http://wiki.apache.org/hadoop/UsingLzoCompression">LZO Compression</a>
* for how to enable it.
* Compression types supported in hbase. LZO is not bundled as part of the hbase distribution. See
* <a href="http://wiki.apache.org/hadoop/UsingLzoCompression">LZO Compression</a> for how to
* enable it.
* @param value Compression type setting.
* @return this (for chained invocation)
*/
@ -415,7 +428,7 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
/**
* @param value True if we are to favor keeping all values for this column family in the
* HRegionServer cache
* HRegionServer cache
* @return this (for chained invocation)
*/
public HColumnDescriptor setInMemory(boolean value) {
@ -429,8 +442,7 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
}
/**
* @param value the prefered in-memory compaction policy
* for this column family
   * @param value the preferred in-memory compaction policy for this column family
* @return this (for chained invocation)
*/
public HColumnDescriptor setInMemoryCompaction(MemoryCompactionPolicy value) {
@ -444,8 +456,7 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
}
/**
* @param value True if deleted rows should not be collected
* immediately.
* @param value True if deleted rows should not be collected immediately.
* @return this (for chained invocation)
*/
public HColumnDescriptor setKeepDeletedCells(KeepDeletedCells value) {
@ -454,9 +465,9 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
}
/**
* By default, HBase only consider timestamp in versions. So a previous Delete with higher ts
* will mask a later Put with lower ts. Set this to true to enable new semantics of versions.
* We will also consider mvcc in versions. See HBASE-15968 for details.
   * By default, HBase only considers timestamps in versions, so a previous Delete with a higher ts
   * will mask a later Put with a lower ts. Set this to true to enable the new semantics of
   * versions; we will also consider mvcc in versions. See HBASE-15968 for details.
*/
@Override
public boolean isNewVersionBehavior() {
@ -468,7 +479,6 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
return this;
}
@Override
public int getTimeToLive() {
return delegatee.getTimeToLive();
@ -485,7 +495,7 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
/**
* @param value Time to live of cell contents, in human readable format
* @see org.apache.hadoop.hbase.util.PrettyPrinter#format(String, Unit)
* @see org.apache.hadoop.hbase.util.PrettyPrinter#format(String, Unit)
* @return this (for chained invocation)
*/
public HColumnDescriptor setTimeToLive(String value) throws HBaseException {
@ -499,8 +509,7 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
}
/**
* @param value The minimum number of versions to keep.
* (used when timeToLive is set)
* @param value The minimum number of versions to keep. (used when timeToLive is set)
* @return this (for chained invocation)
*/
public HColumnDescriptor setMinVersions(int value) {
@ -514,8 +523,8 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
}
/**
* @param value True if hfile DATA type blocks should be cached (We always cache
* INDEX and BLOOM blocks; you cannot turn this off).
* @param value True if hfile DATA type blocks should be cached (We always cache INDEX and BLOOM
* blocks; you cannot turn this off).
* @return this (for chained invocation)
*/
public HColumnDescriptor setBlockCacheEnabled(boolean value) {
@ -542,10 +551,10 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
return delegatee.getScope();
}
/**
* @param value the scope tag
* @return this (for chained invocation)
*/
/**
* @param value the scope tag
* @return this (for chained invocation)
*/
public HColumnDescriptor setScope(int value) {
getDelegateeForModification().setScope(value);
return this;
@ -567,7 +576,6 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
/**
* This is a noop call from HBase 2.0 onwards
*
* @return this (for chained invocation)
   * @deprecated Since 2.0 and will be removed in 3.0 without any replacement. Caching data in on
* heap Cache, when there are both on heap LRU Cache and Bucket Cache will no longer
@ -612,8 +620,7 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
}
/**
* @param value true if we should evict cached blocks from the blockcache on
* close
* @param value true if we should evict cached blocks from the blockcache on close
* @return this (for chained invocation)
*/
public HColumnDescriptor setEvictBlocksOnClose(boolean value) {
@ -696,11 +703,10 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
/**
* @param bytes A pb serialized {@link HColumnDescriptor} instance with pb magic prefix
* @return An instance of {@link HColumnDescriptor} made from <code>bytes</code>
* @throws DeserializationException
* @see #toByteArray()
   * @return An instance of {@link HColumnDescriptor} made from <code>bytes</code>
   * @see #toByteArray()
*/
public static HColumnDescriptor parseFrom(final byte [] bytes) throws DeserializationException {
public static HColumnDescriptor parseFrom(final byte[] bytes) throws DeserializationException {
ColumnFamilyDescriptor desc = ColumnFamilyDescriptorBuilder.parseFrom(bytes);
if (desc instanceof ModifyableColumnFamilyDescriptor) {
return new HColumnDescriptor((ModifyableColumnFamilyDescriptor) desc);
@ -721,7 +727,7 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
/**
* Setter for storing a configuration setting.
* @param key Config key. Same as XML config key e.g. hbase.something.or.other.
* @param key Config key. Same as XML config key e.g. hbase.something.or.other.
* @param value String value. If null, removes the configuration.
*/
public HColumnDescriptor setConfiguration(String key, String value) {
@ -742,8 +748,7 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
}
/**
* Set the encryption algorithm for use with this family
* @param value
   * Set the encryption algorithm for use with this family
   * @param value the encryption algorithm to use
*/
public HColumnDescriptor setEncryptionType(String value) {
getDelegateeForModification().setEncryptionType(value);
@ -814,8 +819,8 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
/**
* Set the replication factor to hfile(s) belonging to this family
* @param value number of replicas the blocks(s) belonging to this CF should have, or
* {@link #DEFAULT_DFS_REPLICATION} for the default replication factor set in the
* filesystem
* {@link #DEFAULT_DFS_REPLICATION} for the default replication factor set in the
* filesystem
* @return this (for chained invocation)
*/
public HColumnDescriptor setDFSReplication(short value) {
@ -831,7 +836,7 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HCo
/**
* Set the storage policy for use with this family
* @param value the policy to set, valid setting includes: <i>"LAZY_PERSIST"</i>,
* <i>"ALL_SSD"</i>, <i>"ONE_SSD"</i>, <i>"HOT"</i>, <i>"WARM"</i>, <i>"COLD"</i>
* <i>"ALL_SSD"</i>, <i>"ONE_SSD"</i>, <i>"HOT"</i>, <i>"WARM"</i>, <i>"COLD"</i>
*/
public HColumnDescriptor setStoragePolicy(String value) {
getDelegateeForModification().setStoragePolicy(value);


@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -24,17 +23,13 @@ import org.apache.hadoop.hbase.util.Addressing;
import org.apache.yetus.audience.InterfaceAudience;
/**
* Data structure to hold RegionInfo and the address for the hosting
* HRegionServer. Immutable. Comparable, but we compare the 'location' only:
* i.e. the hostname and port, and *not* the regioninfo. This means two
* instances are the same if they refer to the same 'location' (the same
* hostname and port), though they may be carrying different regions.
*
* On a big cluster, each client will have thousands of instances of this object, often
* 100 000 of them if not million. It's important to keep the object size as small
* as possible.
*
* <br>This interface has been marked InterfaceAudience.Public in 0.96 and 0.98.
* Data structure to hold RegionInfo and the address for the hosting HRegionServer. Immutable.
* Comparable, but we compare the 'location' only: i.e. the hostname and port, and *not* the
* regioninfo. This means two instances are the same if they refer to the same 'location' (the same
* hostname and port), though they may be carrying different regions. On a big cluster, each client
 * will have thousands of instances of this object, often 100,000 of them if not a million. It's
* important to keep the object size as small as possible. <br>
* This interface has been marked InterfaceAudience.Public in 0.96 and 0.98.
*/
@InterfaceAudience.Public
public class HRegionLocation implements Comparable<HRegionLocation> {
@ -58,7 +53,7 @@ public class HRegionLocation implements Comparable<HRegionLocation> {
@Override
public String toString() {
return "region=" + (this.regionInfo == null ? "null" : this.regionInfo.getRegionNameAsString())
+ ", hostname=" + this.serverName + ", seqNum=" + seqNum;
+ ", hostname=" + this.serverName + ", seqNum=" + seqNum;
}
/**
@ -75,7 +70,7 @@ public class HRegionLocation implements Comparable<HRegionLocation> {
if (!(o instanceof HRegionLocation)) {
return false;
}
return this.compareTo((HRegionLocation)o) == 0;
return this.compareTo((HRegionLocation) o) == 0;
}
/**
@ -87,19 +82,18 @@ public class HRegionLocation implements Comparable<HRegionLocation> {
}
/**
*
* @return Immutable HRegionInfo
* @deprecated Since 2.0.0. Will remove in 3.0.0. Use {@link #getRegion()}} instead.
*/
@Deprecated
public HRegionInfo getRegionInfo(){
public HRegionInfo getRegionInfo() {
return regionInfo == null ? null : new ImmutableHRegionInfo(regionInfo);
}
/**
* @return regionInfo
   * @return regionInfo
*/
public RegionInfo getRegion(){
public RegionInfo getRegion() {
return regionInfo;
}
@ -116,8 +110,8 @@ public class HRegionLocation implements Comparable<HRegionLocation> {
}
/**
* @return String made of hostname and port formatted as
* per {@link Addressing#createHostAndPortStr(String, int)}
* @return String made of hostname and port formatted as per
* {@link Addressing#createHostAndPortStr(String, int)}
*/
public String getHostnamePort() {
return Addressing.createHostAndPortStr(this.getHostname(), this.getPort());
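As a usage note for the class above, here is a hedged sketch of obtaining an HRegionLocation through RegionLocator; the connection, table name, and row key are assumed or illustrative.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionLocationExample {
  static void whereIsRow(Connection connection) throws IOException {
    try (RegionLocator locator = connection.getRegionLocator(TableName.valueOf("example_table"))) {
      HRegionLocation location = locator.getRegionLocation(Bytes.toBytes("some-row"));
      // equals()/compareTo() consider only hostname and port, not the RegionInfo payload.
      System.out.println(location.getHostnamePort() + " hosts "
        + location.getRegion().getRegionNameAsString());
    }
  }
}
```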

Some files were not shown because too many files have changed in this diff.