HBASE-26899 Run spotless:apply

Closes #4312
Duo Zhang 2022-05-01 22:15:04 +08:00
parent 0edecbf9e0
commit 9c8c9e7fbf
4645 changed files with 110099 additions and 131240 deletions


@ -7,7 +7,7 @@ Release 0.92.1 - Unreleased
BUG FIXES
HBASE-5176 AssignmentManager#getRegion: logging nit adds a redundant '+' (Karthik K)
HBASE-5237 Addendum for HBASE-5160 and HBASE-4397 (Ram)
HBASE-5235 HLogSplitter writer thread's streams not getting closed when any
of the writer threads has exceptions. (Ram)
HBASE-5243 LogSyncerThread not getting shutdown waiting for the interrupted flag (Ram)
HBASE-5255 Use singletons for OperationStatus to save memory (Benoit)
@ -144,7 +144,7 @@ Release 0.92.0 - 01/23/2012
HBASE-3897 Docs (notsoquick guide) suggest invalid XML (Philip Zeyliger)
HBASE-3898 TestSplitTransactionOnCluster broke in TRUNK
HBASE-3826 Minor compaction needs to check if still over
compactionThreshold after compacting (Nicolas Spiegelberg)
HBASE-3912 [Stargate] Columns not handle by Scan
HBASE-3903 A successful write to client write-buffer may be lost or not
visible (Doug Meil)
@ -198,7 +198,7 @@ Release 0.92.0 - 01/23/2012
HBASE-4112 Creating table may throw NullPointerException (Jinchao via Ted Yu)
HBASE-4093 When verifyAndAssignRoot throws exception, the deadServers state
cannot be changed (fulin wang via Ted Yu)
HBASE-4118 method regionserver.MemStore#updateColumnValue: the check for
qualifier and family is missing (N Keywal via Ted Yu)
HBASE-4127 Don't modify table's name away in HBaseAdmin
HBASE-4105 Stargate does not support Content-Type: application/json and
@ -300,7 +300,7 @@ Release 0.92.0 - 01/23/2012
HBASE-4395 EnableTableHandler races with itself
HBASE-4414 Region splits by size not being triggered
HBASE-4322 [hbck] Update checkIntegrity/checkRegionChain
to present more accurate region split problem
(Jon Hseih)
HBASE-4417 HBaseAdmin.checkHBaseAvailable() doesn't close ZooKeeper connections
(Stefan Seelmann)
@ -483,7 +483,7 @@ Release 0.92.0 - 01/23/2012
HBASE-5100 Rollback of split could cause closed region to be opened again (Chunhui)
HBASE-4397 -ROOT-, .META. tables stay offline for too long in recovery phase after all RSs
are shutdown at the same time (Ming Ma)
HBASE-5094 The META can hold an entry for a region with a different server name from the one
actually in the AssignmentManager thus making the region inaccessible. (Ram)
HBASE-5081 Distributed log splitting deleteNode races against splitLog retry (Prakash)
HBASE-4357 Region stayed in transition - in closing state (Ming Ma)
@ -517,7 +517,7 @@ Release 0.92.0 - 01/23/2012
HBASE-5105 TestImportTsv failed with hadoop 0.22 (Ming Ma)
IMPROVEMENTS
HBASE-3290 Max Compaction Size (Nicolas Spiegelberg via Stack)
HBASE-3292 Expose block cache hit/miss/evict counts into region server
metrics
HBASE-2936 Differentiate between daemon & restart sleep periods
@ -538,7 +538,7 @@ Release 0.92.0 - 01/23/2012
(rpc version 43)
HBASE-3563 [site] Add one-page-only version of hbase doc
HBASE-3564 DemoClient.pl - a demo client in Perl
HBASE-3560 the hbase-default entry of "hbase.defaults.for.version"
causes tests not to run via not-maven
HBASE-3513 upgrade thrift to 0.5.0 and use mvn version
HBASE-3533 Allow HBASE_LIBRARY_PATH env var to specify extra locations
@ -601,7 +601,7 @@ Release 0.92.0 - 01/23/2012
HBASE-3765 metrics.xml - small format change and adding nav to hbase
book metrics section (Doug Meil)
HBASE-3759 Eliminate use of ThreadLocals for CoprocessorEnvironment
bypass() and complete()
HBASE-3701 revisit ArrayList creation (Ted Yu via Stack)
HBASE-3753 Book.xml - architecture, adding more Store info (Doug Meil)
HBASE-3784 book.xml - adding small subsection in architecture/client on
@ -738,7 +738,7 @@ Release 0.92.0 - 01/23/2012
HBASE-4425 Provide access to RpcServer instance from RegionServerServices
HBASE-4411 When copying tables/CFs, allow CF names to be changed
(David Revell)
HBASE-4424 Provide coprocessors access to createTable() via
MasterServices
HBASE-4432 Enable/Disable off heap cache with config (Li Pi)
HBASE-4434 seek optimization: don't do eager HFile Scanner
@ -1098,7 +1098,7 @@ Release 0.90.3 - May 19th, 2011
HBASE-3846 Set RIT timeout higher
Release 0.90.2 - 20110408
BUG FIXES
HBASE-3545 Possible liveness issue with MasterServerAddress in
HRegionServer getMaster (Greg Bowyer via Stack)
@ -1151,7 +1151,7 @@ Release 0.90.2 - 20110408
HBASE-3654 Weird blocking between getOnlineRegion and createRegionLoad
(Subbu M Iyer via Stack)
HBASE-3666 TestScannerTimeout fails occasionally
HBASE-3497 TableMapReduceUtil.initTableReducerJob broken due to setConf
method in TableOutputFormat
HBASE-3686 ClientScanner skips too many rows on recovery if using scanner
caching (Sean Sechrist via Stack)
@ -1159,7 +1159,7 @@ Release 0.90.2 - 20110408
IMPROVEMENTS
HBASE-3542 MultiGet methods in Thrift
HBASE-3586 Improve the selection of regions to balance (Ted Yu via Andrew
Purtell)
HBASE-3603 Remove -XX:+HeapDumpOnOutOfMemoryError autodump of heap option
on OOME
HBASE-3285 Hlog recovery takes too much time
@ -1186,19 +1186,19 @@ Release 0.90.1 - February 9th, 2011
HBASE-3455 Add memstore-local allocation buffers to combat heap
fragmentation in the region server. Experimental / disabled
by default in 0.90.1
BUG FIXES
HBASE-3445 Master crashes on data that was moved from different host
HBASE-3449 Server shutdown handlers deadlocked waiting for META
HBASE-3456 Fix hardcoding of 20 second socket timeout down in HBaseClient
HBASE-3476 HFile -m option need not scan key values
(Prakash Khemani via Lars George)
HBASE-3481 max seq id in flushed file can be larger than its correct value
causing data loss during recovery
HBASE-3493 HMaster sometimes hangs during initialization due to missing
notify call (Bruno Dumon via Stack)
HBASE-3483 Memstore lower limit should trigger asynchronous flushes
HBASE-3494 checkAndPut implementation doesnt verify row param and writable
row are the same
HBASE-3416 For intra-row scanning, the update readers notification resets
the query matcher and can lead to incorrect behavior
@ -1288,7 +1288,7 @@ Release 0.90.0 - January 19th, 2011
HBASE-1830 HbaseObjectWritable methods should allow null HBCs
for when Writable is not Configurable (Stack via jgray)
HBASE-1847 Delete latest of a null qualifier when non-null qualifiers
exist throws a RuntimeException
HBASE-1850 src/examples/mapred do not compile after HBASE-1822
HBASE-1853 Each time around the regionserver core loop, we clear the
messages to pass master, even if we failed to deliver them
@ -1343,9 +1343,9 @@ Release 0.90.0 - January 19th, 2011
HBASE-1954 Transactional scans do not see newest put (Clint Morgan via
Stack)
HBASE-1919 code: HRS.delete seems to ignore exceptions it shouldnt
HBASE-1951 Stack overflow when calling HTable.checkAndPut()
when deleting a lot of values
HBASE-1781 Weird behavior of WildcardColumnTracker.checkColumn(),
looks like recursive loop
HBASE-1949 KeyValue expiration by Time-to-Live during major compaction is
broken (Gary Helmling via Stack)
@ -1377,7 +1377,7 @@ Release 0.90.0 - January 19th, 2011
'descendingIterator' (Ching-Shen Chen via Stack)
HBASE-2033 Shell scan 'limit' is off by one
HBASE-2040 Fixes to group commit
HBASE-2047 Example command in the "Getting Started"
documentation doesn't work (Benoit Sigoure via JD)
HBASE-2048 Small inconsistency in the "Example API Usage"
(Benoit Sigoure via JD)
@ -1385,14 +1385,14 @@ Release 0.90.0 - January 19th, 2011
HBASE-1960 Master should wait for DFS to come up when creating
hbase.version
HBASE-2054 memstore size 0 is >= than blocking -2.0g size
HBASE-2064 Cannot disable a table if at the same the Master is moving
its regions around
HBASE-2065 Cannot disable a table if any of its region is opening
at the same time
HBASE-2026 NPE in StoreScanner on compaction
HBASE-2072 fs.automatic.close isn't passed to FileSystem
HBASE-2075 Master requires HDFS superuser privileges due to waitOnSafeMode
HBASE-2077 NullPointerException with an open scanner that expired causing
an immediate region server shutdown (Sam Pullara via JD)
HBASE-2078 Add JMX settings as commented out lines to hbase-env.sh
(Lars George via JD)
@ -1459,11 +1459,11 @@ Release 0.90.0 - January 19th, 2011
HBASE-2258 The WhileMatchFilter doesn't delegate the call to filterRow()
HBASE-2259 StackOverflow in ExplicitColumnTracker when row has many columns
HBASE-2268 [stargate] Failed tests and DEBUG output is dumped to console
since move to Mavenized build
HBASE-2276 Hbase Shell hcd() method is broken by the replication scope
parameter (Alexey Kovyrin via Lars George)
HBASE-2244 META gets inconsistent in a number of crash scenarios
HBASE-2284 fsWriteLatency metric may be incorrectly reported
(Kannan Muthukkaruppan via Stack)
HBASE-2063 For hfileoutputformat, on timeout/failure/kill clean up
half-written hfile (Ruslan Salyakhov via Stack)
@ -1478,7 +1478,7 @@ Release 0.90.0 - January 19th, 2011
HBASE-2308 Fix the bin/rename_table.rb script, make it work again
HBASE-2307 hbase-2295 changed hregion size, testheapsize broke... fix it
HBASE-2269 PerformanceEvaluation "--nomapred" may assign duplicate random
seed over multiple testing threads (Tatsuya Kawano via Stack)
HBASE-2287 TypeError in shell (Alexey Kovyrin via Stack)
HBASE-2023 Client sync block can cause 1 thread of a multi-threaded client
to block all others (Karthik Ranganathan via Stack)
@ -1548,10 +1548,10 @@ Release 0.90.0 - January 19th, 2011
HBASE-2544 Forward port branch 0.20 WAL to TRUNK
HBASE-2546 Specify default filesystem in both the new and old way (needed
if we are to run on 0.20 and 0.21 hadoop)
HBASE-1895 HConstants.MAX_ROW_LENGTH is incorrectly 64k, should be 32k
HBASE-1968 Give clients access to the write buffer
HBASE-2028 Add HTable.incrementColumnValue support to shell
(Lars George via Andrew Purtell)
HBASE-2138 unknown metrics type
HBASE-2551 Forward port fixes that are in branch but not in trunk (part of
the merge of old 0.20 into TRUNK task) -- part 1.
@ -1560,7 +1560,7 @@ Release 0.90.0 - January 19th, 2011
HBASE-2344 InfoServer and hence HBase Master doesn't fully start if you
have HADOOP-6151 patch (Kannan Muthukkaruppan via Stack)
HBASE-2382 Don't rely on fs.getDefaultReplication() to roll HLogs
(Nicolas Spiegelberg via Stack)
HBASE-2415 Disable META splitting in 0.20 (Todd Lipcon via Stack)
HBASE-2421 Put hangs for 10 retries on failed region servers
HBASE-2442 Log lease recovery catches IOException too widely
@ -1617,7 +1617,7 @@ Release 0.90.0 - January 19th, 2011
HBASE-2703 ui not working in distributed context
HBASE-2710 Shell should use default terminal width when autodetection fails
(Kannan Muthukkaruppan via Todd Lipcon)
HBASE-2712 Cached region location that went stale won't recover if
asking for first row
HBASE-2732 TestZooKeeper was broken, HBASE-2691 showed it
HBASE-2670 Provide atomicity for readers even when new insert has
@ -1653,7 +1653,7 @@ Release 0.90.0 - January 19th, 2011
HBASE-2772 Scan doesn't recover from region server failure
HBASE-2775 Update of hadoop jar in HBASE-2771 broke TestMultiClusters
HBASE-2774 Spin in ReadWriteConsistencyControl eating CPU (load > 40) and
no progress running YCSB on clean cluster startup
HBASE-2785 TestScannerTimeout.test2772 is flaky
HBASE-2787 PE is confused about flushCommits
HBASE-2707 Can't recover from a dead ROOT server if any exceptions happens
@ -1665,18 +1665,18 @@ Release 0.90.0 - January 19th, 2011
HBASE-2797 Another NPE in ReadWriteConsistencyControl
HBASE-2831 Fix '$bin' path duplication in setup scripts
(Nicolas Spiegelberg via Stack)
HBASE-2781 ZKW.createUnassignedRegion doesn't make sure existing znode is
in the right state (Karthik Ranganathan via JD)
HBASE-2727 Splits writing one file only is untenable; need dir of recovered
edits ordered by sequenceid
HBASE-2843 Readd bloomfilter test over zealously removed by HBASE-2625
HBASE-2846 Make rest server be same as thrift and avro servers
HBASE-1511 Pseudo distributed mode in LocalHBaseCluster
(Nicolas Spiegelberg via Stack)
HBASE-2851 Remove testDynamicBloom() unit test
(Nicolas Spiegelberg via Stack)
HBASE-2853 TestLoadIncrementalHFiles fails on TRUNK
HBASE-2854 broken tests on trunk
HBASE-2859 Cleanup deprecated stuff in TestHLog (Alex Newman via Stack)
HBASE-2858 TestReplication.queueFailover fails half the time
HBASE-2863 HBASE-2553 removed an important edge case
@ -1789,7 +1789,7 @@ Release 0.90.0 - January 19th, 2011
HBASE-3064 Long sleeping in HConnectionManager after thread is interrupted
(Bruno Dumon via Stack)
HBASE-2753 Remove sorted() methods from Result now that Gets are Scans
HBASE-3059 TestReadWriteConsistencyControl occasionally hangs (Hairong
via Ryan)
HBASE-2906 [rest/stargate] URI decoding in RowResource
HBASE-3008 Memstore.updateColumnValue passes wrong flag to heapSizeChange
@ -1820,7 +1820,7 @@ Release 0.90.0 - January 19th, 2011
HBASE-3121 [rest] Do not perform cache control when returning results
HBASE-2669 HCM.shutdownHook causes data loss with
hbase.client.write.buffer != 0
HBASE-2985 HRegionServer.multi() no longer calls HRegion.put(List) when
possible
HBASE-3031 CopyTable MR job named "Copy Table" in Driver
HBASE-2658 REST (stargate) TableRegionModel Regions need to be updated to
@ -1891,7 +1891,7 @@ Release 0.90.0 - January 19th, 2011
HBASE-3199 large response handling: some fixups and cleanups
HBASE-3212 More testing of enable/disable uncovered base condition not in
place; i.e. that only one enable/disable runs at a time
HBASE-2898 MultiPut makes proper error handling impossible and leads to
corrupted data
HBASE-3213 If do abort of backup master will get NPE instead of graceful
abort
@ -1904,7 +1904,7 @@ Release 0.90.0 - January 19th, 2011
HBASE-3224 NPE in KeyValue$KVComparator.compare when compacting
HBASE-3233 Fix Long Running Stats
HBASE-3232 Fix KeyOnlyFilter + Add Value Length (Nicolas via Ryan)
HBASE-3235 Intermittent incrementColumnValue failure in TestHRegion
(Gary via Ryan)
HBASE-3241 check to see if we exceeded hbase.regionserver.maxlogs limit is
incorrect (Kannan Muthukkaruppan via JD)
@ -1955,7 +1955,7 @@ Release 0.90.0 - January 19th, 2011
HBASE-3352 enabling a non-existent table from shell prints no error
HBASE-3353 table.jsp doesn't handle entries in META without server info
HBASE-3351 ReplicationZookeeper goes to ZK every time a znode is modified
HBASE-3326 Replication state's znode should be created else it
defaults to false
HBASE-3355 Stopping a stopped cluster leaks an HMaster
HBASE-3356 Add more checks in replication if RS is stopped
@ -2060,8 +2060,8 @@ Release 0.90.0 - January 19th, 2011
HBASE-1942 Update hadoop jars in trunk; update to r831142
HBASE-1943 Remove AgileJSON; unused
HBASE-1944 Add a "deferred log flush" attribute to HTD
HBASE-1945 Remove META and ROOT memcache size bandaid
HBASE-1947 If HBase starts/stops often in less than 24 hours,
you end up with lots of store files
HBASE-1829 Make use of start/stop row in TableInputFormat
(Lars George via Stack)
@ -2109,7 +2109,7 @@ Release 0.90.0 - January 19th, 2011
Stack)
HBASE-2076 Many javadoc warnings
HBASE-2068 MetricsRate is missing "registry" parameter (Lars George via JD)
HBASE-2025 0.20.2 accessed from older client throws
UndeclaredThrowableException; frustrates rolling upgrade
HBASE-2081 Set the retries higher in shell since client pause is lower
HBASE-1956 Export HDFS read and write latency as a metric
@ -2131,7 +2131,7 @@ Release 0.90.0 - January 19th, 2011
./bin/start-hbase.sh in a checkout
HBASE-2136 Forward-port the old mapred package
HBASE-2133 Increase default number of client handlers
HBASE-2109 status 'simple' should show total requests per second, also
the requests/sec is wrong as is
HBASE-2151 Remove onelab and include generated thrift classes in javadoc
(Lars Francke via Stack)
@ -2170,9 +2170,9 @@ Release 0.90.0 - January 19th, 2011
HBASE-2250 typo in the maven pom
HBASE-2254 Improvements to the Maven POMs (Lars Francke via Stack)
HBASE-2262 ZKW.ensureExists should check for existence
HBASE-2264 Adjust the contrib apps to the Maven project layout
(Lars Francke via Lars George)
HBASE-2245 Unnecessary call to syncWal(region); in HRegionServer
(Benoit Sigoure via JD)
HBASE-2246 Add a getConfiguration method to HTableInterface
(Benoit Sigoure via JD)
@ -2180,10 +2180,10 @@ Release 0.90.0 - January 19th, 2011
development (Alexey Kovyrin via Stack)
HBASE-2267 More improvements to the Maven build (Lars Francke via Stack)
HBASE-2174 Stop from resolving HRegionServer addresses to names using DNS
on every heartbeat (Karthik Ranganathan via Stack)
HBASE-2302 Optimize M-R by bulk excluding regions - less InputSplit-s to
avoid traffic on region servers when performing M-R on a subset
of the table (Kay Kay via Stack)
HBASE-2309 Add apache releases to pom (list of ) repositories
(Kay Kay via Stack)
HBASE-2279 Hbase Shell does not have any tests (Alexey Kovyrin via Stack)
@ -2209,15 +2209,15 @@ Release 0.90.0 - January 19th, 2011
HBASE-2374 TableInputFormat - Configurable parameter to add column families
(Kay Kay via Stack)
HBASE-2388 Give a very explicit message when we figure a big GC pause
HBASE-2270 Improve how we handle recursive calls in ExplicitColumnTracker
and WildcardColumnTracker
HBASE-2402 [stargate] set maxVersions on gets
HBASE-2087 The wait on compaction because "Too many store files"
holds up all flushing
HBASE-2252 Mapping a very big table kills region servers
HBASE-2412 [stargate] PerformanceEvaluation
HBASE-2419 Remove from RS logs the fat NotServingRegionException stack
HBASE-2286 [Transactional Contrib] Correctly handle or avoid cases where
writes occur in same millisecond (Clint Morgan via J-D)
HBASE-2360 Make sure we have all the hadoop fixes in our our copy of its rpc
(Todd Lipcon via Stack)
@ -2251,7 +2251,7 @@ Release 0.90.0 - January 19th, 2011
(Todd Lipcon via Stack)
HBASE-2547 [mvn] assembly:assembly does not include hbase-X.X.X-test.jar
(Paul Smith via Stack)
HBASE-2037 The core elements of HBASE-2037: refactoring flushing, and adding
configurability in which HRegion subclass is instantiated
HBASE-2248 Provide new non-copy mechanism to assure atomic reads in get and scan
HBASE-2523 Add check for licenses before rolling an RC, add to
@ -2264,7 +2264,7 @@ Release 0.90.0 - January 19th, 2011
HBASE-2520 Cleanup arrays vs Lists of scanners (Todd Lipcon via Stack)
HBASE-2551 Forward port fixes that are in branch but not in trunk (part
of the merge of old 0.20 into TRUNK task)
HBASE-2466 Improving filter API to allow for modification of keyvalue list
by filter (Juhani Connolly via Ryan)
HBASE-2566 Remove 'lib' dir; it only has libthrift and that is being
pulled from http://people.apache.org/~rawson/repo/....
@ -2289,13 +2289,13 @@ Release 0.90.0 - January 19th, 2011
failing hudson on occasion)
HBASE-2651 Allow alternate column separators to be specified for ImportTsv
HBASE-2661 Add test case for row atomicity guarantee
HBASE-2578 Add ability for tests to override server-side timestamp
setting (currentTimeMillis) (Daniel Ploeg via Ryan Rawson)
HBASE-2558 Our javadoc overview -- "Getting Started", requirements, etc. --
is not carried across by mvn javadoc:javadoc target
HBASE-2618 Don't inherit from HConstants (Benoit Sigoure via Stack)
HBASE-2208 TableServers # processBatchOfRows - converts from List to [ ]
- Expensive copy
HBASE-2694 Move RS to Master region open/close messaging into ZooKeeper
HBASE-2716 Make HBase's maven artifacts configurable with -D
(Alex Newman via Stack)
@ -2308,7 +2308,7 @@ Release 0.90.0 - January 19th, 2011
message
HBASE-2724 Update to new release of Guava library
HBASE-2735 Make HBASE-2694 replication-friendly
HBASE-2683 Make it obvious in the documentation that ZooKeeper needs
permanent storage
HBASE-2764 Force all Chore tasks to have a thread name
HBASE-2762 Add warning to master if running without append enabled
@ -2319,7 +2319,7 @@ Release 0.90.0 - January 19th, 2011
(Nicolas Spiegelberg via JD)
HBASE-2786 TestHLog.testSplit hangs (Nicolas Spiegelberg via JD)
HBASE-2790 Purge apache-forrest from TRUNK
HBASE-2793 Add ability to extract a specified list of versions of a column
in a single roundtrip (Kannan via Ryan)
HBASE-2828 HTable unnecessarily coupled with HMaster
(Nicolas Spiegelberg via Stack)
@ -2331,7 +2331,7 @@ Release 0.90.0 - January 19th, 2011
next column (Pranav via jgray)
HBASE-2835 Update hadoop jar to head of branch-0.20-append to catch three
added patches
HBASE-2840 Remove the final remnants of the old Get code - the query matchers
and other helper classes
HBASE-2845 Small edit of shell main help page cutting down some on white
space and text
@ -2360,9 +2360,9 @@ Release 0.90.0 - January 19th, 2011
HBASE-1517 Implement inexpensive seek operations in HFile (Pranav via Ryan)
HBASE-2903 ColumnPrefix filtering (Pranav via Ryan)
HBASE-2904 Smart seeking using filters (Pranav via Ryan)
HBASE-2922 HLog preparation and cleanup are done under the updateLock,
major slowdown
HBASE-1845 MultiGet, MultiDelete, and MultiPut - batched to the
appropriate region servers (Marc Limotte via Ryan)
HBASE-2867 Have master show its address using hostname rather than IP
HBASE-2696 ZooKeeper cleanup and refactor
@ -2375,7 +2375,7 @@ Release 0.90.0 - January 19th, 2011
HBASE-2857 HBaseAdmin.tableExists() should not require a full meta scan
HBASE-2962 Add missing methods to HTableInterface (and HTable)
(Lars Francke via Stack)
HBASE-2942 Custom filters should not require registration in
HBaseObjectWritable (Gary Helmling via Andrew Purtell)
HBASE-2976 Running HFile tool passing fully-qualified filename I get
'IllegalArgumentException: Wrong FS'
@ -2417,7 +2417,7 @@ Release 0.90.0 - January 19th, 2011
HBASE-3133 Only log compaction requests when a request is actually added
to the queue
HBASE-3132 Print TimestampRange and BloomFilters in HFile pretty print
HBASE-2514 RegionServer should refuse to be assigned a region that use
LZO when LZO isn't available
HBASE-3082 For ICV gets, first look in MemStore before reading StoreFiles
(prakash via jgray)
@ -2548,7 +2548,7 @@ Release 0.90.0 - January 19th, 2011
HBASE-410 [testing] Speed up the test suite
HBASE-2041 Change WAL default configuration values
HBASE-2997 Performance fixes - profiler driven
HBASE-2450 For single row reads of specific columns, seek to the
first column in HFiles rather than start of row
(Pranav via Ryan, some Ryan)
@ -2615,8 +2615,8 @@ Release 0.20.0 - Tue Sep 8 12:53:05 PDT 2009
HBASE-1243 oldlogfile.dat is screwed, so is it's region
HBASE-1169 When a shutdown is requested, stop scanning META regions
immediately
HBASE-1251 HConnectionManager.getConnection(HBaseConfiguration) returns
same HConnection for different HBaseConfigurations
HBASE-1157, HBASE-1156 If we do not take start code as a part of region
server recovery, we could inadvertantly try to reassign regions
assigned to a restarted server with a different start code;
@ -2675,7 +2675,7 @@ Release 0.20.0 - Tue Sep 8 12:53:05 PDT 2009
(Thomas Schneider via Andrew Purtell)
HBASE-1374 NPE out of ZooKeeperWrapper.loadZooKeeperConfig
HBASE-1336 Splitting up the compare of family+column into 2 different
compare
HBASE-1377 RS address is null in master web UI
HBASE-1344 WARN IllegalStateException: Cannot set a region as open if it
has not been pending
@ -2737,7 +2737,7 @@ Release 0.20.0 - Tue Sep 8 12:53:05 PDT 2009
binary comparator (Jon Gray via Stack)
HBASE-1500 KeyValue$KeyComparator array overrun
HBASE-1513 Compactions too slow
HBASE-1516 Investigate if StoreScanner will not return the next row if
earlied-out of previous row (Jon Gray)
HBASE-1520 StoreFileScanner catches and ignore IOExceptions from HFile
HBASE-1522 We delete splits before their time occasionally
@ -2848,7 +2848,7 @@ Release 0.20.0 - Tue Sep 8 12:53:05 PDT 2009
when trying to read
HBASE-1705 Thrift server: deletes in mutateRow/s don't delete
(Tim Sell and Ryan Rawson via Stack)
HBASE-1703 ICVs across /during a flush can cause multiple keys with the
same TS (bad)
HBASE-1671 HBASE-1609 broke scanners riding across splits
HBASE-1717 Put on client-side uses passed-in byte[]s rather than always
@ -2921,9 +2921,9 @@ Release 0.20.0 - Tue Sep 8 12:53:05 PDT 2009
(Toby White via Andrew Purtell)
HBASE-1180 Add missing import statements to SampleUploader and remove
unnecessary @Overrides (Ryan Smith via Andrew Purtell)
HBASE-1191 ZooKeeper ensureParentExists calls fail
on absolute path (Nitay Joffe via Jean-Daniel Cryans)
HBASE-1187 After disabling/enabling a table, the regions seems to
be assigned to only 1-2 region servers
HBASE-1210 Allow truncation of output for scan and get commands in shell
(Lars George via Stack)
@ -2955,7 +2955,7 @@ Release 0.20.0 - Tue Sep 8 12:53:05 PDT 2009
(Nitay Joffe via Stack)
HBASE-1285 Forcing compactions should be available via thrift
(Tim Sell via Stack)
HBASE-1186 Memory-aware Maps with LRU eviction for cell cache
(Jonathan Gray via Andrew Purtell)
HBASE-1205 RegionServers should find new master when a new master comes up
(Nitay Joffe via Andrew Purtell)
@ -3033,7 +3033,7 @@ Release 0.20.0 - Tue Sep 8 12:53:05 PDT 2009
HBASE-1466 Binary keys are not first class citizens
(Ryan Rawson via Stack)
HBASE-1445 Add the ability to start a master from any machine
HBASE-1474 Add zk attributes to list of attributes
in master and regionserver UIs
HBASE-1448 Add a node in ZK to tell all masters to shutdown
HBASE-1478 Remove hbase master options from shell (Nitay Joffe via Stack)
@ -3042,7 +3042,7 @@ Release 0.20.0 - Tue Sep 8 12:53:05 PDT 2009
HBASE-1490 Update ZooKeeper library
HBASE-1489 Basic git ignores for people who use git and eclipse
HBASE-1453 Add HADOOP-4681 to our bundled hadoop, add to 'gettting started'
recommendation that hbase users backport
HBASE-1507 iCMS as default JVM
HBASE-1509 Add explanation to shell "help" command on how to use binarykeys
(Lars George via Stack)
@ -3054,7 +3054,7 @@ Release 0.20.0 - Tue Sep 8 12:53:05 PDT 2009
on hbase-user traffic
HBASE-1539 prevent aborts due to missing zoo.cfg
HBASE-1488 Fix TestThriftServer and re-enable it
HBASE-1541 Scanning multiple column families in the presence of deleted
families results in bad scans
HBASE-1540 Client delete unit test, define behavior
(Jonathan Gray via Stack)
@ -3161,13 +3161,13 @@ Release 0.19.0 - 01/21/2009
HBASE-906 [shell] Truncates output
HBASE-912 PE is broken when other tables exist
HBASE-853 [shell] Cannot describe meta tables (Izaak Rubin via Stack)
HBASE-844 Can't pass script to hbase shell
HBASE-837 Add unit tests for ThriftServer.HBaseHandler (Izaak Rubin via
Stack)
HBASE-913 Classes using log4j directly
HBASE-914 MSG_REPORT_CLOSE has a byte array for a message
HBASE-918 Region balancing during startup makes cluster unstable
HBASE-921 region close and open processed out of order; makes for
disagreement between master and regionserver on region state
HBASE-925 HRS NPE on way out if no master to connect to
HBASE-928 NPE throwing RetriesExhaustedException
@ -3277,7 +3277,7 @@ Release 0.19.0 - 01/21/2009
crashed server; regionserver tries to execute incomplete log
HBASE-1104, HBASE-1098, HBASE-1096: Doubly-assigned regions redux,
IllegalStateException: Cannot set a region to be closed it it was
not already marked as closing, Does not recover if HRS carrying
-ROOT- goes down
HBASE-1114 Weird NPEs compacting
HBASE-1116 generated web.xml and svn don't play nice together
@ -3320,7 +3320,7 @@ Release 0.19.0 - 01/21/2009
HBASE-949 Add an HBase Manual
HBASE-839 Update hadoop libs in hbase; move hbase TRUNK on to an hadoop
0.19.0 RC
HBASE-785 Remove InfoServer, use HADOOP-3824 StatusHttpServer
instead (requires hadoop 0.19)
HBASE-81 When a scanner lease times out, throw a more "user friendly" exception
HBASE-978 Remove BloomFilterDescriptor. It is no longer used.
@ -3396,7 +3396,7 @@ Release 0.18.0 - September 21st, 2008
BUG FIXES
HBASE-881 Fixed bug when Master tries to reassign split or offline regions
from a dead server
HBASE-860 Fixed Bug in IndexTableReduce where it concerns writing lucene
index fields.
HBASE-805 Remove unnecessary getRow overloads in HRS (Jonathan Gray via
Jim Kellerman) (Fix whitespace diffs in HRegionServer)
@ -3504,8 +3504,8 @@ Release 0.2.0 - August 8, 2008.
HBASE-487 Replace hql w/ a hbase-friendly jirb or jython shell
Part 1: purge of hql and added raw jirb in its place.
HBASE-521 Improve client scanner interface
HBASE-288 Add in-memory caching of data. Required update of hadoop to
0.17.0-dev.2008-02-07_12-01-58. (Tom White via Stack)
HBASE-696 Make bloomfilter true/false and self-sizing
HBASE-720 clean up inconsistencies around deletes (Izaak Rubin via Stack)
HBASE-796 Deprecates Text methods from HTable
@ -3577,7 +3577,7 @@ Release 0.2.0 - August 8, 2008.
HBASE-715 Base HBase 0.2 on Hadoop 0.17.1
HBASE-718 hbase shell help info
HBASE-717 alter table broke with new shell returns InvalidColumnNameException
HBASE-573 HBase does not read hadoop-*.xml for dfs configuration after
moving out hadoop/contrib
HBASE-11 Unexpected exits corrupt DFS
HBASE-12 When hbase regionserver restarts, it says "impossible state for
@ -3632,7 +3632,7 @@ Release 0.2.0 - August 8, 2008.
HBASE-8 Delete table does not remove the table directory in the FS
HBASE-428 Under continuous upload of rows, WrongRegionExceptions are thrown
that reach the client even after retries
HBASE-460 TestMigrate broken when HBase moved to subproject
HBASE-462 Update migration tool
HBASE-473 When a table is deleted, master sends multiple close messages to
the region server
@ -3656,7 +3656,7 @@ Release 0.2.0 - August 8, 2008.
HBASE-537 Wait for hdfs to exit safe mode
HBASE-476 RegexpRowFilter behaves incorectly when there are multiple store
files (Clint Morgan via Jim Kellerman)
HBASE-527 RegexpRowFilter does not work when there are columns from
multiple families (Clint Morgan via Jim Kellerman)
HBASE-534 Double-assignment at SPLIT-time
HBASE-712 midKey found compacting is the first, not necessarily the optimal
@ -3721,13 +3721,13 @@ Release 0.2.0 - August 8, 2008.
HBASE-790 During import, single region blocks requests for >10 minutes,
thread dumps, throws out pending requests, and continues
(Jonathan Gray via Stack)
IMPROVEMENTS
HBASE-559 MR example job to count table rows
HBASE-596 DemoClient.py (Ivan Begtin via Stack)
HBASE-581 Allow adding filters to TableInputFormat (At same time, ensure TIF
is subclassable) (David Alves via Stack)
HBASE-603 When an exception bubbles out of getRegionServerWithRetries, wrap
the exception with a RetriesExhaustedException
HBASE-600 Filters have excessive DEBUG logging
HBASE-611 regionserver should do basic health check before reporting
@ -3789,7 +3789,7 @@ Release 0.2.0 - August 8, 2008.
HMaster (Bryan Duxbury via Stack)
HBASE-440 Add optional log roll interval so that log files are garbage
collected
HBASE-407 Keep HRegionLocation information in LRU structure
HBASE-444 hbase is very slow at determining table is not present
HBASE-438 XMLOutputter state should be initialized.
HBASE-414 Move client classes into client package
@ -3801,7 +3801,7 @@ Release 0.2.0 - August 8, 2008.
HBASE-464 HBASE-419 introduced javadoc errors
HBASE-468 Move HStoreKey back to o.a.h.h
HBASE-442 Move internal classes out of HRegionServer
HBASE-466 Move HMasterInterface, HRegionInterface, and
HMasterRegionInterface into o.a.h.h.ipc
HBASE-479 Speed up TestLogRolling
HBASE-480 Tool to manually merge two regions
@ -3851,7 +3851,7 @@ Release 0.2.0 - August 8, 2008.
timestamps
HBASE-511 Do exponential backoff in clients on NSRE, WRE, ISE, etc.
(Andrew Purtell via Jim Kellerman)
OPTIMIZATIONS
HBASE-430 Performance: Scanners and getRow return maps with duplicate data
@ -3867,7 +3867,7 @@ Release 0.1.3 - 07/25/2008
HBASE-648 If mapfile index is empty, run repair
HBASE-659 HLog#cacheFlushLock not cleared; hangs a region
HBASE-663 Incorrect sequence number for cache flush
HBASE-652 Dropping table fails silently if table isn't disabled
HBASE-674 Memcache size unreliable
HBASE-665 server side scanner doesn't honor stop row
HBASE-681 NPE in Memcache (Clint Morgan via Jim Kellerman)
@ -3918,7 +3918,7 @@ Release 0.1.2 - 05/13/2008
HBASE-618 We always compact if 2 files, regardless of the compaction threshold setting
HBASE-619 Fix 'logs' link in UI
HBASE-620 testmergetool failing in branch and trunk since hbase-618 went in
IMPROVEMENTS
HBASE-559 MR example job to count table rows
HBASE-578 Upgrade branch to 0.16.3 hadoop.
@ -3952,7 +3952,7 @@ Release 0.1.1 - 04/11/2008
Release 0.1.0
INCOMPATIBLE CHANGES
HADOOP-2750 Deprecated methods startBatchUpdate, commitBatch, abortBatch,
and renewLease have been removed from HTable (Bryan Duxbury via
Jim Kellerman)
HADOOP-2786 Move hbase out of hadoop core
@ -3961,7 +3961,7 @@ Release 0.1.0
with a hbase from 0.16.0
NEW FEATURES
HBASE-506 When an exception has to escape ServerCallable due to exhausted retries,
show all the exceptions that lead to this situation
OPTIMIZATIONS
@ -3997,7 +3997,7 @@ Release 0.1.0
HBASE-514 table 'does not exist' when it does
HBASE-537 Wait for hdfs to exit safe mode
HBASE-534 Double-assignment at SPLIT-time
IMPROVEMENTS
HADOOP-2555 Refactor the HTable#get and HTable#getRow methods to avoid
repetition of retry-on-failure logic (thanks to Peter Dolan and
@ -4006,22 +4006,22 @@ Release 0.1.0
HBASE-480 Tool to manually merge two regions
HBASE-477 Add support for an HBASE_CLASSPATH
HBASE-515 At least double default timeouts between regionserver and master
HBASE-482 package-level javadoc should have example client or at least
point at the FAQ
HBASE-497 RegionServer needs to recover if datanode goes down
HBASE-456 Clearly state which ports need to be opened in order to run HBase
HBASE-483 Merge tool won't merge two overlapping regions
HBASE-476 RegexpRowFilter behaves incorectly when there are multiple store
files (Clint Morgan via Jim Kellerman)
HBASE-527 RegexpRowFilter does not work when there are columns from
multiple families (Clint Morgan via Jim Kellerman)
Release 0.16.0
2008/02/04 HBase is now a subproject of Hadoop. The first HBase release as
a subproject will be release 0.1.0 which will be equivalent to
the version of HBase included in Hadoop 0.16.0. In order to
accomplish this, the HBase portion of HBASE-288 (formerly
HADOOP-1398) has been backed out. Once 0.1.0 is frozen (depending
mostly on changes to infrastructure due to becoming a sub project
instead of a contrib project), this patch will re-appear on HBase
@ -4030,7 +4030,7 @@ Release 0.16.0
INCOMPATIBLE CHANGES
HADOOP-2056 A table with row keys containing colon fails to split regions
HADOOP-2079 Fix generated HLog, HRegion names
HADOOP-2495 Minor performance improvements: Slim-down BatchOperation, etc.
HADOOP-2506 Remove the algebra package
HADOOP-2519 Performance improvements: Customized RPC serialization
HADOOP-2478 Restructure how HBase lays out files in the file system (phase 1)
@ -4155,7 +4155,7 @@ Release 0.16.0
TableNotFoundException when a different table has been created
previously (Bryan Duxbury via Stack)
HADOOP-2587 Splits blocked by compactions cause region to be offline for
duration of compaction.
HADOOP-2592 Scanning, a region can let out a row that its not supposed
to have
HADOOP-2493 hbase will split on row when the start and end row is the
@ -4188,7 +4188,7 @@ Release 0.16.0
table or table you are enumerating isn't the first table
Delete empty file: src/contrib/hbase/src/java/org/apache/hadoop/hbase/mapred/
TableOutputCollector.java per Nigel Daley
IMPROVEMENTS
HADOOP-2401 Add convenience put method that takes writable
(Johan Oskarsson via Stack)
@ -4230,7 +4230,7 @@ Release 0.16.0
HADOOP-2351 If select command returns no result, it doesn't need to show the
header information (Edward Yoon via Stack)
HADOOP-2285 Add being able to shutdown regionservers (Dennis Kubes via Stack)
HADOOP-2458 HStoreFile.writeSplitInfo should just call
HStoreFile.Reference.write
HADOOP-2471 Add reading/writing MapFile to PerformanceEvaluation suite
HADOOP-2522 Separate MapFile benchmark from PerformanceEvaluation
@ -4250,7 +4250,7 @@ Release 0.16.0
HADOOP-2616 hbase not spliting when the total size of region reaches max
region size * 1.5
HADOOP-2643 Make migration tool smarter.
Release 0.15.1
Branch 0.15
@ -4318,9 +4318,9 @@ Branch 0.15
HADOOP-1975 HBase tests failing with java.lang.NumberFormatException
HADOOP-1990 Regression test instability affects nightly and patch builds
HADOOP-1996 TestHStoreFile fails on windows if run multiple times
HADOOP-1937 When the master times out a region server's lease, it is too
aggressive in reclaiming the server's log.
HADOOP-2004 webapp hql formatting bugs
HADOOP_2011 Make hbase daemon scripts take args in same order as hadoop
daemon scripts
HADOOP-2017 TestRegionServerAbort failure in patch build #903 and
@ -4339,7 +4339,7 @@ Branch 0.15
HADOOP-1794 Remove deprecated APIs
HADOOP-1802 Startup scripts should wait until hdfs as cleared 'safe mode'
HADOOP-1833 bin/stop_hbase.sh returns before it completes
(Izaak Rubin via Stack)
HADOOP-1835 Updated Documentation for HBase setup/installation
(Izaak Rubin via Stack)
HADOOP-1868 Make default configuration more responsive
@ -4358,13 +4358,13 @@ Below are the list of changes before 2007-08-18
1. HADOOP-1384. HBase omnibus patch. (jimk, Vuk Ercegovac, and Michael Stack)
2. HADOOP-1402. Fix javadoc warnings in hbase contrib. (Michael Stack)
3. HADOOP-1404. HBase command-line shutdown failing (Michael Stack)
4. HADOOP-1397. Replace custom hbase locking with
java.util.concurrent.locks.ReentrantLock (Michael Stack)
5. HADOOP-1403. HBase reliability - make master and region server more fault
tolerant.
6. HADOOP-1418. HBase miscellaneous: unit test for HClient, client to do
'Performance Evaluation', etc.
7. HADOOP-1420, HADOOP-1423. Findbugs changes, remove reference to removed
class HLocking.
8. HADOOP-1424. TestHBaseCluster fails with IllegalMonitorStateException. Fix
regression introduced by HADOOP-1397.
@ -4378,7 +4378,7 @@ Below are the list of changes before 2007-08-18
14. HADOOP-1460 On shutdown IOException with complaint 'Cannot cancel lease
that is not held'
15. HADOOP-1421 Failover detection, split log files.
For the files modified, also clean up javadoc, class, field and method
visibility (HADOOP-1466)
16. HADOOP-1479 Fix NPE in HStore#get if store file only has keys < passed key.
17. HADOOP-1476 Distributed version of 'Performance Evaluation' script
@ -4397,13 +4397,13 @@ Below are the list of changes before 2007-08-18
26. HADOOP-1543 [hbase] Add HClient.tableExists
27. HADOOP-1519 [hbase] map/reduce interface for HBase. (Vuk Ercegovac and
Jim Kellerman)
28. HADOOP-1523 Hung region server waiting on write locks
29. HADOOP-1560 NPE in MiniHBaseCluster on Windows
30. HADOOP-1531 Add RowFilter to HRegion.HScanner
Adds a row filtering interface and two implemenentations: A page scanner,
and a regex row/column-data matcher. (James Kennedy via Stack)
31. HADOOP-1566 Key-making utility
32. HADOOP-1415 Provide configurable per-column bloom filters.
HADOOP-1466 Clean up visibility and javadoc issues in HBase.
33. HADOOP-1538 Provide capability for client specified time stamps in HBase
HADOOP-1466 Clean up visibility and javadoc issues in HBase.
@ -4417,7 +4417,7 @@ Below are the list of changes before 2007-08-18
41. HADOOP-1614 [hbase] HClient does not protect itself from simultaneous updates
42. HADOOP-1468 Add HBase batch update to reduce RPC overhead
43. HADOOP-1616 Sporadic TestTable failures
44. HADOOP-1615 Replacing thread notification-based queue with
java.util.concurrent.BlockingQueue in HMaster, HRegionServer
45. HADOOP-1606 Updated implementation of RowFilterSet, RowFilterInterface
(Izaak Rubin via Stack)
@ -4438,10 +4438,10 @@ Below are the list of changes before 2007-08-18
53. HADOOP-1528 HClient for multiple tables - expose close table function
54. HADOOP-1466 Clean up warnings, visibility and javadoc issues in HBase.
55. HADOOP-1662 Make region splits faster
56. HADOOP-1678 On region split, master should designate which host should
serve daughter splits. Phase 1: Master balances load for new regions and
when a region server fails.
57. HADOOP-1678 On region split, master should designate which host should
serve daughter splits. Phase 2: Master assigns children of split region
instead of HRegionServer serving both children.
58. HADOOP-1710 All updates should be batch updates


@ -17,7 +17,7 @@
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */
#
usage="Usage: considerAsDead.sh --hostname serverName"
@ -50,12 +50,12 @@ do
rs_parts=(${rs//,/ })
hostname=${rs_parts[0]}
echo $deadhost
echo $hostname
if [ "$deadhost" == "$hostname" ]; then
znode="$zkrs/$rs"
echo "ZNode Deleting:" $znode
$bin/hbase zkcli delete $znode > /dev/null 2>&1
sleep 1
ssh $HBASE_SSH_OPTS $hostname $remote_cmd 2>&1 | sed "s/^/$hostname: /"
fi
done
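# For reference, the usage string defined above implies an invocation along these
# lines (the path and hostname are illustrative, not taken from this diff):
#   ./bin/considerAsDead.sh --hostname rs1.example.com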


@ -74,7 +74,7 @@ check_for_znodes() {
znodes=`"$bin"/hbase zkcli ls $zparent/$zchild 2>&1 | tail -1 | sed "s/\[//" | sed "s/\]//"`
if [ "$znodes" != "" ]; then
echo -n "ZNode(s) [${znodes}] of $command are not expired. Exiting without cleaning hbase data."
echo #force a newline
exit 1;
else
echo -n "All ZNode(s) of $command are expired."
@ -99,7 +99,7 @@ execute_clean_acls() {
clean_up() {
case $1 in
--cleanZk)
execute_zk_command "deleteall ${zparent}";
;;
--cleanHdfs)
@ -120,7 +120,7 @@ clean_up() {
;;
*)
;;
esac
}
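# Sketch of how the clean_up dispatcher above is driven; the wrapper script name is
# an assumption (file names are not shown in this diff), and the flags are the ones
# handled in the case statement:
#   ./hbase-cleanup.sh --cleanZk
#   ./hbase-cleanup.sh --cleanHdfs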
check_znode_exists() {


@ -103,7 +103,7 @@ do
break
fi
done
# Allow alternate hbase conf dir location.
HBASE_CONF_DIR="${HBASE_CONF_DIR:-$HBASE_HOME/conf}"
# List of hbase regions servers.


@ -17,7 +17,7 @@
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */
#
# Run a shell command on all backup master hosts.
#
# Environment Variables
@ -45,7 +45,7 @@ bin=`cd "$bin">/dev/null; pwd`
. "$bin"/hbase-config.sh
# If the master backup file is specified in the command line,
# then it takes precedence over the definition in
# hbase-env.sh. Save it here.
HOSTLIST=$HBASE_BACKUP_MASTERS
@ -69,6 +69,6 @@ if [ -f $HOSTLIST ]; then
sleep $HBASE_SLAVE_SLEEP
fi
done
fi
wait


@ -17,7 +17,7 @@
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */
#
# Run a shell command on all regionserver hosts.
#
# Environment Variables
@ -45,7 +45,7 @@ bin=`cd "$bin">/dev/null; pwd`
. "$bin"/hbase-config.sh
# If the regionservers file is specified in the command line,
# then it takes precedence over the definition in
# hbase-env.sh. Save it here.
HOSTLIST=$HBASE_REGIONSERVERS


@ -52,7 +52,7 @@ fi
export HBASE_LOG_PREFIX=hbase-$HBASE_IDENT_STRING-master-$HOSTNAME
export HBASE_LOGFILE=$HBASE_LOG_PREFIX.log
logout=$HBASE_LOG_DIR/$HBASE_LOG_PREFIX.out
loglog="${HBASE_LOG_DIR}/${HBASE_LOGFILE}"
pid=${HBASE_PID_DIR:-/tmp}/hbase-$HBASE_IDENT_STRING-master.pid
@ -74,7 +74,7 @@ fi
# distributed == false means that the HMaster will kill ZK when it exits
# HBASE-6504 - only take the first line of the output in case verbose gc is on
distMode=`$bin/hbase --config "$HBASE_CONF_DIR" org.apache.hadoop.hbase.util.HBaseConfTool hbase.cluster.distributed | head -n 1`
if [ "$distMode" == 'true' ]
if [ "$distMode" == 'true' ]
then
"$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" stop zookeeper
fi


@ -68,7 +68,7 @@ while [ $# -ne 0 ]; do
-h|--help)
print_usage ;;
--kill)
IS_KILL=1
cmd_specified ;;
--show)
IS_SHOW=1
@ -106,5 +106,3 @@ else
echo "No command specified" >&2
exit 1
fi


@ -17,7 +17,7 @@
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */
#
# Run a shell command on all zookeeper hosts.
#
# Environment Variables


@ -33,7 +33,7 @@
# The maximum amount of heap to use. Default is left to JVM default.
# export HBASE_HEAPSIZE=1G
# Uncomment below if you intend to use off heap cache. For example, to allocate 8G of
# offheap, set the value to "8G". See http://hbase.apache.org/book.html#direct.memory
# in the refguide for guidance setting this config.
# export HBASE_OFFHEAPSIZE=1G
@ -71,7 +71,7 @@
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"
# See the package documentation for org.apache.hadoop.hbase.io.hfile for other configurations
# needed setting up off-heap block caching.
# Uncomment and adjust to enable JMX exporting
# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
@ -102,7 +102,7 @@
# Where log files are stored. $HBASE_HOME/logs by default.
# export HBASE_LOG_DIR=${HBASE_HOME}/logs
# Enable remote JDWP debugging of major HBase processes. Meant for Core Developers
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8072"
@ -126,13 +126,13 @@
# Tell HBase whether it should manage it's own instance of ZooKeeper or not.
# export HBASE_MANAGES_ZK=true
# The default log rolling policy is RFA, where the log file is rolled as per the size defined for the
# RFA appender. Please refer to the log4j2.properties file to see more details on this appender.
# In case one needs to do log rolling on a date change, one should set the environment property
# HBASE_ROOT_LOGGER to "<DESIRED_LOG LEVEL>,DRFA".
# For example:
# export HBASE_ROOT_LOGGER=INFO,DRFA
# The reason for changing default to RFA is to avoid the boundary case of filling out disk space as
# DRFA doesn't put any cap on the log size. Please refer to HBase-5655 for more context.
# Tell HBase whether it should include Hadoop's lib when start up,


@ -24,20 +24,20 @@
<property>
<name>security.client.protocol.acl</name>
<value>*</value>
<description>ACL for ClientProtocol and AdminProtocol implementations (ie.
clients talking to HRegionServers)
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.admin.protocol.acl</name>
<value>*</value>
<description>ACL for HMasterInterface protocol implementation (ie.
clients talking to HMaster for admin operations).
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
@ -46,8 +46,8 @@
<value>*</value>
<description>ACL for HMasterRegionInterface protocol implementations
(for HRegionServers communicating with HMaster)
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
</configuration>


@ -38,4 +38,4 @@ ${type_declaration}</template><template autoinsert="true" context="classbody_con
</template><template autoinsert="true" context="catchblock_context" deleted="true" description="Code in new catch blocks" enabled="true" id="org.eclipse.jdt.ui.text.codetemplates.catchblock" name="catchblock">// ${todo} Auto-generated catch block
${exception_var}.printStackTrace();</template><template autoinsert="false" context="methodbody_context" deleted="true" description="Code in created method stubs" enabled="true" id="org.eclipse.jdt.ui.text.codetemplates.methodbody" name="methodbody">// ${todo} Implement ${enclosing_type}.${enclosing_method}
${body_statement}</template><template autoinsert="false" context="constructorbody_context" deleted="true" description="Code in created constructor stubs" enabled="true" id="org.eclipse.jdt.ui.text.codetemplates.constructorbody" name="constructorbody">${body_statement}
// ${todo} Implement constructor</template><template autoinsert="true" context="getterbody_context" deleted="true" description="Code in created getters" enabled="true" id="org.eclipse.jdt.ui.text.codetemplates.getterbody" name="getterbody">return ${field};</template><template autoinsert="true" context="setterbody_context" deleted="true" description="Code in created setters" enabled="true" id="org.eclipse.jdt.ui.text.codetemplates.setterbody" name="setterbody">${field} = ${param};</template></templates>


@ -87,7 +87,7 @@ these personalities; a pre-packaged personality can be selected via the
`--project` parameter. There is a provided HBase personality in Yetus, however
the HBase project maintains its own within the HBase source repository. Specify
the path to the personality file using `--personality`. The HBase repository
places this file under `dev-support/hbase-personality.sh`.
places this file under `dev-support/hbase-personality.sh`.
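For a rough sense of how these flags fit together, an invocation might look like the sketch below. The `test-patch` entry point and the patch argument are assumptions here; the exact wrapper script and any additional arguments depend on your Yetus installation.

    # Hypothetical invocation: HBASE-12345.patch is a placeholder argument.
    test-patch --project=hbase \
      --personality=dev-support/hbase-personality.sh \
      HBASE-12345.patch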
## Docker mode

View File

@ -141,7 +141,7 @@ Interactions with Jira:
This invocation will build a "simple" database, correlating commits to
branches. It omits gathering the detailed release tag data, so it runs pretty
quickly.
quickly.
Example Run:

View File

@ -344,53 +344,53 @@ EOF
echo "writing out example TSV to example.tsv"
cat >"${working_dir}/example.tsv" <<EOF
row1 value8 value8
row1 value8 value8
row3 value2
row2 value9
row10 value1
row2 value9
row10 value1
pow1 value8 value8
pow3 value2
pow3 value2
pow2 value9
pow10 value1
pow10 value1
paw1 value8 value8
paw3 value2
paw2 value9
paw3 value2
paw2 value9
paw10 value1
raw1 value8 value8
raw1 value8 value8
raw3 value2
raw2 value9
raw10 value1
raw2 value9
raw10 value1
aow1 value8 value8
aow3 value2
aow3 value2
aow2 value9
aow10 value1
aow10 value1
aaw1 value8 value8
aaw3 value2
aaw2 value9
aaw3 value2
aaw2 value9
aaw10 value1
how1 value8 value8
how1 value8 value8
how3 value2
how2 value9
how10 value1
how2 value9
how10 value1
zow1 value8 value8
zow3 value2
zow3 value2
zow2 value9
zow10 value1
zow10 value1
zaw1 value8 value8
zaw3 value2
zaw2 value9
zaw3 value2
zaw2 value9
zaw10 value1
haw1 value8 value8
haw1 value8 value8
haw3 value2
haw2 value9
haw10 value1
haw2 value9
haw10 value1
low1 value8 value8
low3 value2
low3 value2
low2 value9
low10 value1
low10 value1
law1 value8 value8
law3 value2
law2 value9
law3 value2
law2 value9
law10 value1
EOF

View File

@ -53,7 +53,7 @@ runAllTests=0
#set to 1 to replay the failed tests. Previous reports are kept in
# fail_ files
replayFailed=0
replayFailed=0
#set to 0 to run all medium & large tests in a single maven operation
# instead of two
@ -85,10 +85,10 @@ mvnCommand="mvn "
function createListDeadProcess {
id=$$
listDeadProcess=""
#list of the process with a ppid of 1
sonProcess=`ps -o pid= --ppid 1`
#then the process with a pgid of the script
for pId in $sonProcess
do
@ -119,32 +119,32 @@ function cleanProcess {
jstack -F -l $pId
kill $pId
echo "kill sent, waiting for 30 seconds"
sleep 30
sleep 30
son=`ps -o pid= --pid $pId | wc -l`
if (test $son -gt 0)
then
then
echo "$pId, java sub process of $id, is still running after a standard kill, using kill -9 now"
echo "Stack for $pId before kill -9:"
jstack -F -l $pId
kill -9 $pId
echo "kill sent, waiting for 2 seconds"
sleep 2
echo "Process $pId killed by kill -9"
sleep 2
echo "Process $pId killed by kill -9"
else
echo "Process $pId killed by standard kill -15"
echo "Process $pId killed by standard kill -15"
fi
else
echo "$pId is not a java process (it's $name), I don't kill it."
fi
done
createListDeadProcess
if (test ${#listDeadProcess} -gt 0)
then
echo "There are still $sonProcess for process $id left."
else
echo "Process $id clean, no son process left"
fi
echo "Process $id clean, no son process left"
fi
}
#count the number of ',' in a string
@ -155,7 +155,7 @@ function countClasses {
count=$((cars - 1))
}
######################################### script
echo "Starting Script. Possible parameters are: runAllTests, replayFailed, nonParallelMaven"
echo "Other parameters are sent to maven"
@ -177,11 +177,11 @@ do
if [ $arg == "nonParallelMaven" ]
then
parallelMaven=0
else
args=$args" $arg"
else
args=$args" $arg"
fi
fi
fi
fi
done
@ -195,24 +195,24 @@ for testFile in $testsList
do
lenPath=$((${#rootTestClassDirectory}))
len=$((${#testFile} - $lenPath - 5)) # len(".java") == 5
shortTestFile=${testFile:lenPath:$len}
shortTestFile=${testFile:lenPath:$len}
testName=$(echo $shortTestFile | sed 's/\//\./g')
#The ',' is used in the grep pattern as we don't want to catch
# partial name
isFlaky=$((`echo $flakyTests | grep "$testName," | wc -l`))
if (test $isFlaky -eq 0)
then
then
isSmall=0
isMedium=0
isLarge=0
# determine the category of the test by grepping into the source code
# determine the category of the test by grepping into the source code
isMedium=`grep "@Category" $testFile | grep "MediumTests.class" | wc -l`
if (test $isMedium -eq 0)
then
if (test $isMedium -eq 0)
then
isLarge=`grep "@Category" $testFile | grep "LargeTests.class" | wc -l`
if (test $isLarge -eq 0)
then
@ -230,22 +230,22 @@ do
fi
fi
fi
#put the test in the right list
if (test $isSmall -gt 0)
then
if (test $isSmall -gt 0)
then
smallList="$smallList,$testName"
fi
if (test $isMedium -gt 0)
then
fi
if (test $isMedium -gt 0)
then
mediumList="$mediumList,$testName"
fi
if (test $isLarge -gt 0)
then
fi
if (test $isLarge -gt 0)
then
largeList="$largeList,$testName"
fi
fi
fi
fi
done
#remove the ',' at the beginning
@ -285,7 +285,7 @@ do
nextList=2
runList1=$runList1,$testClass
else
nextList=1
nextList=1
runList2=$runList2,$testClass
fi
done
@ -297,27 +297,27 @@ runList2=${runList2:1:${#runList2}}
#now we can run the tests, at last!
echo "Running small tests with one maven instance, in parallel"
#echo Small tests are $smallList
$mvnCommand -P singleJVMTests test -Dtest=$smallList $args
#echo Small tests are $smallList
$mvnCommand -P singleJVMTests test -Dtest=$smallList $args
cleanProcess
exeTime=$(((`date +%s` - $startTime)/60))
echo "Small tests executed after $exeTime minutes"
if (test $parallelMaven -gt 0)
then
then
echo "Running tests with two maven instances in parallel"
$mvnCommand -P localTests test -Dtest=$runList1 $args &
#give some time to the first process if there is anything to compile
sleep 30
$mvnCommand -P localTests test -Dtest=$runList2 $args
#wait for forked process to finish
wait
cleanProcess
exeTime=$(((`date +%s` - $startTime)/60))
echo "Medium and large (if selected) tests executed after $exeTime minutes"
@ -329,14 +329,14 @@ then
$mvnCommand -P localTests test -Dtest=$flakyTests $args
cleanProcess
exeTime=$(((`date +%s` - $startTime)/60))
echo "Flaky tests executed after $exeTime minutes"
echo "Flaky tests executed after $exeTime minutes"
fi
else
echo "Running tests with a single maven instance, no parallelization"
$mvnCommand -P localTests test -Dtest=$runList1,$runList2,$flakyTests $args
cleanProcess
cleanProcess
exeTime=$(((`date +%s` - $startTime)/60))
echo "Single maven instance tests executed after $exeTime minutes"
echo "Single maven instance tests executed after $exeTime minutes"
fi
#let's analyze the results
@ -360,7 +360,7 @@ for testClass in `echo $fullRunList | sed 's/,/ /g'`
do
reportFile=$surefireReportDirectory/$testClass.txt
outputReportFile=$surefireReportDirectory/$testClass-output.txt
if [ -s $reportFile ];
then
isError=`grep FAILURE $reportFile | wc -l`
@ -368,22 +368,22 @@ do
then
errorList="$errorList,$testClass"
errorCounter=$(($errorCounter + 1))
#let's copy the files if we want to use it later
#let's copy the files if we want to use it later
cp $reportFile "$surefireReportDirectory/fail_$timestamp.$testClass.txt"
if [ -s $reportFile ];
then
cp $outputReportFile "$surefireReportDirectory/fail_$timestamp.$testClass"-output.txt""
fi
else
sucessCounter=$(($sucessCounter +1))
fi
fi
else
#report file does not exist or is empty => the test didn't finish
notFinishedCounter=$(($notFinishedCounter + 1))
notFinishedList="$notFinishedList,$testClass"
fi
fi
done
#list of all tests that failed
@ -411,7 +411,7 @@ echo
echo "Tests in error are: $errorPresList"
echo "Tests that didn't finish are: $notFinishedPresList"
echo
echo "Execution time in minutes: $exeTime"
echo "Execution time in minutes: $exeTime"
echo "##########################"

View File

@ -33,4 +33,3 @@ export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:
export MAVEN_OPTS="${MAVEN_OPTS:-"-Xmx3100M -XX:-UsePerfData"}"
ulimit -n

View File

@ -17,11 +17,11 @@
# specific language governing permissions and limitations
# under the License.
# This script assumes that your remote is called "origin"
# This script assumes that your remote is called "origin"
# and that your local master branch is called "master".
# I am sure it could be made more abstract but these are the defaults.
# Edit this line to point to your default directory,
# Edit this line to point to your default directory,
# or always pass a directory to the script.
DEFAULT_DIR="EDIT_ME"
@ -69,13 +69,13 @@ function check_git_branch_status {
}
function get_jira_status {
# This function expects as an argument the JIRA ID,
# This function expects as an argument the JIRA ID,
# and returns 99 if resolved and 1 if it couldn't
# get the status.
# The JIRA status looks like this in the HTML:
# The JIRA status looks like this in the HTML:
# span id="resolution-val" class="value resolved" >
# The following is a bit brittle, but filters for lines with
# The following is a bit brittle, but filters for lines with
# resolution-val returns 99 if it's resolved
jira_url='https://issues.apache.org/jira/rest/api/2/issue'
jira_id="$1"
@ -106,7 +106,7 @@ while getopts ":hd:" opt; do
print_usage
exit 0
;;
*)
*)
echo "Invalid argument: $OPTARG" >&2
print_usage >&2
exit 1
@ -135,7 +135,7 @@ get_tracking_branches
for i in "${tracking_branches[@]}"; do
git checkout -q "$i"
# Exit if git status is dirty
check_git_branch_status
check_git_branch_status
git pull -q --rebase
status=$?
if [ "$status" -ne 0 ]; then
@ -169,7 +169,7 @@ for i in "${all_branches[@]}"; do
git checkout -q "$i"
# Exit if git status is dirty
check_git_branch_status
check_git_branch_status
# If this branch has a remote, don't rebase it
# If it has a remote, it has a log with at least one entry
@ -184,7 +184,7 @@ for i in "${all_branches[@]}"; do
echo "Failed. Rolling back. Rebase $i manually."
git rebase --abort
fi
elif [ $status -ne 0 ]; then
elif [ $status -ne 0 ]; then
# If status is 0 it means there is a remote branch, we already took care of it
echo "Unknown error: $?" >&2
exit 1
@ -195,10 +195,10 @@ done
for i in "${deleted_branches[@]}"; do
read -p "$i's JIRA is resolved. Delete? " yn
case $yn in
[Yy])
[Yy])
git branch -D $i
;;
*)
*)
echo "To delete it manually, run git branch -D $deleted_branches"
;;
esac

View File

@ -52,7 +52,7 @@ if $PATCH -p0 -E --dry-run < $PATCH_FILE 2>&1 > $TMP; then
# correct place to put those files.
# NOTE 2014/07/17:
# Temporarily disabling the below check since our jenkins boxes seem not to be defaulting to bash
# Temporarily disabling the below check since our jenkins boxes seem not to be defaulting to bash
# causing below checks to fail. Once it is fixed, we can revert the commit and enable this again.
# TMP2=/tmp/tmp.paths.2.$$

View File

@ -32,7 +32,7 @@ options:
-h Show this message
-c Run 'mvn clean' before running the tests
-f FILE Run the additional tests listed in the FILE
-u Only run unit tests. Default is to run
-u Only run unit tests. Default is to run
unit and integration tests
-n N Run each test N times. Default = 1.
-s N Print N slowest tests
@ -92,7 +92,7 @@ do
r)
server=1
;;
?)
?)
usage
exit 1
esac
@ -175,7 +175,7 @@ done
# Print a report of the slowest running tests
if [ ! -z $showSlowest ]; then
testNameIdx=0
for (( i = 0; i < ${#test[@]}; i++ ))
do

View File

@ -29,7 +29,7 @@
#set -x
# printenv
### Setup some variables.
### Setup some variables.
bindir=$(dirname $0)
# This key is set by our surefire configuration up in the main pom.xml

View File

@ -1,4 +1,4 @@
<?xml version="1.0"?>
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
@ -21,8 +21,8 @@
-->
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase</artifactId>
<version>3.0.0-alpha-3-SNAPSHOT</version>
<relativePath>..</relativePath>
</parent>

View File

@ -15,13 +15,11 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to the client. This tests the hbase-client package and all of the client
* tests in hbase-server.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests

View File

@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to coprocessors.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests

View File

@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to the {@code org.apache.hadoop.hbase.filter} package.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests

View File

@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as failing commonly on public build infrastructure.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests

View File

@ -15,13 +15,11 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to the {@code org.apache.hadoop.hbase.io} package. Things like HFile and
* the like.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests

View File

@ -15,23 +15,20 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as 'integration/system' test, meaning that the test class has the following
* characteristics:
* <ul>
* <li> Possibly takes hours to complete</li>
* <li> Can be run on a mini cluster or an actual cluster</li>
* <li> Can make changes to the given cluster (starting stopping daemons, etc)</li>
* <li> Should not be run in parallel with other integration tests</li>
* <li>Possibly takes hours to complete</li>
* <li>Can be run on a mini cluster or an actual cluster</li>
* <li>Can make changes to the given cluster (starting stopping daemons, etc)</li>
* <li>Should not be run in parallel with other integration tests</li>
* </ul>
*
* Integration / System tests should have a class name starting with "IntegrationTest", and
* should be annotated with @Category(IntegrationTests.class). Integration tests can be run
* using the IntegrationTestsDriver class or from mvn verify.
*
* Integration / System tests should have a class name starting with "IntegrationTest", and should
* be annotated with @Category(IntegrationTests.class). Integration tests can be run using the
* IntegrationTestsDriver class or from mvn verify.
* @see SmallTests
* @see MediumTests
* @see LargeTests
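As a hedged sketch (the class and method names below are illustrative and not part of this commit), tagging an integration test typically looks like:

import org.apache.hadoop.hbase.testclassification.IntegrationTests;
import org.junit.experimental.categories.Category;

// Hypothetical example: the "IntegrationTest" name prefix lets IntegrationTestsDriver
// discover it, and the category keeps it out of the ordinary small/medium/large runs.
@Category(IntegrationTests.class)
public class IntegrationTestExampleScan {
  // test methods that exercise a running (mini or real) cluster go here
}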

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,21 +15,19 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tagging a test as 'large', means that the test class has the following characteristics:
* <ul>
* <li>it can be executed in an isolated JVM (Tests can however be executed in different JVM on the
* same machine simultaneously so be careful two concurrent tests end up fighting over ports
* or other singular resources).</li>
* <li>ideally, the whole large test-suite/class, no matter how many or how few test methods it
* has, will run in less than three minutes</li>
* <li>No large test can take longer than ten minutes; it will be killed. See 'Integration Tests'
* if you need to run tests longer than this.</li>
* <li>it can be executed in an isolated JVM (Tests can however be executed in different JVM on the
* same machine simultaneously so be careful two concurrent tests end up fighting over ports or
* other singular resources).</li>
* <li>ideally, the whole large test-suite/class, no matter how many or how few test methods it has,
* will run in less than three minutes</li>
* <li>No large test can take longer than ten minutes; it will be killed. See 'Integration Tests'
* if you need to run tests longer than this.</li>
* </ul>
*
* @see SmallTests
* @see MediumTests
* @see IntegrationTests

View File

@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to mapred or mapreduce.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests

View File

@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to the master.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,21 +15,18 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tagging a test as 'medium' means that the test class has the following characteristics:
* <ul>
* <li>it can be executed in an isolated JVM (Tests can however be executed in different JVMs on
* the same machine simultaneously so be careful two concurrent tests end up fighting over ports
* or other singular resources).</li>
* <li>ideally, the whole medium test-suite/class, no matter how many or how few test methods it
* has, will complete in 50 seconds; otherwise make it a 'large' test.</li>
* <li>it can be executed in an isolated JVM (Tests can however be executed in different JVMs on the
* same machine simultaneously so be careful two concurrent tests end up fighting over ports or
* other singular resources).</li>
* <li>ideally, the whole medium test-suite/class, no matter how many or how few test methods it
* has, will complete in 50 seconds; otherwise make it a 'large' test.</li>
* </ul>
*
* Use it for tests that cannot be tagged as 'small'. Use it when you need to start up a cluster.
*
* Use it for tests that cannot be tagged as 'small'. Use it when you need to start up a cluster.
* @see SmallTests
* @see LargeTests
* @see IntegrationTests

View File

@ -15,7 +15,6 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**

View File

@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as not easily falling into any of the below categories.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests

View File

@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to RPC.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information

View File

@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to the regionserver.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests

View File

@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to replication.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests

View File

@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to the REST capability of HBase.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests

View File

@ -15,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to security.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -20,14 +20,14 @@ package org.apache.hadoop.hbase.testclassification;
/**
* Tagging a test as 'small' means that the test class has the following characteristics:
* <ul>
* <li>it can be run simultaneously with other small tests all in the same JVM</li>
* <li>ideally, the WHOLE implementing test-suite/class, no matter how many or how few test
* methods it has, should take less than 15 seconds to complete</li>
* <li>it does not use a cluster</li>
* <li>it can be run simultaneously with other small tests all in the same JVM</li>
* <li>ideally, the WHOLE implementing test-suite/class, no matter how many or how few test methods
* it has, should take less than 15 seconds to complete</li>
* <li>it does not use a cluster</li>
* </ul>
*
* @see MediumTests
* @see LargeTests
* @see IntegrationTests
*/
public interface SmallTests {}
public interface SmallTests {
}
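For comparison, a small test tagged with this category usually follows the same pattern that TestHelloHBase uses elsewhere in this commit. The class below is a made-up illustration, not part of the change, assuming the usual HBaseClassTestRule/@Category setup.

import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.testclassification.SmallTests;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.experimental.categories.Category;

// Hypothetical small test: no cluster, expected to finish well under the 15-second budget.
@Category(SmallTests.class)
public class TestExampleUtility {

  @ClassRule
  public static final HBaseClassTestRule CLASS_RULE =
    HBaseClassTestRule.forClass(TestExampleUtility.class);

  @Test
  public void testSomethingCheap() {
    // pure in-JVM assertions only; no HBaseTestingUtil cluster is started here
  }
}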

View File

@ -15,13 +15,11 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as related to mapreduce and taking longer than 5 minutes to run on public build
* Tag a test as related to mapreduce and taking longer than 5 minutes to run on public build
* infrastructure.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests

View File

@ -15,13 +15,11 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**
* Tag a test as a region test which takes longer than 5 minutes to run on public build
* infrastructure.
*
* @see org.apache.hadoop.hbase.testclassification.ClientTests
* @see org.apache.hadoop.hbase.testclassification.CoprocessorTests
* @see org.apache.hadoop.hbase.testclassification.FilterTests

View File

@ -15,7 +15,6 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.testclassification;
/**

View File

@ -1,6 +1,5 @@
<?xml version="1.0"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
* Licensed to the Apache Software Foundation (ASF) under one
@ -23,8 +22,8 @@
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase-archetypes</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-archetypes</artifactId>
<version>3.0.0-alpha-3-SNAPSHOT</version>
<relativePath>..</relativePath>
</parent>
@ -58,10 +57,10 @@
further using xml-maven-plugin for xslt transformation, below. -->
<execution>
<id>hbase-client__copy-src-to-build-archetype-subdir</id>
<phase>generate-resources</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<phase>generate-resources</phase>
<configuration>
<outputDirectory>/${project.basedir}/../${hbase-client.dir}/${build.archetype.subdir}</outputDirectory>
<resources>
@ -76,29 +75,30 @@
</execution>
<execution>
<id>hbase-client__copy-pom-to-temp-for-xslt-processing</id>
<phase>generate-resources</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<phase>generate-resources</phase>
<configuration>
<outputDirectory>/${project.basedir}/../${hbase-client.dir}/${temp.exemplar.subdir}</outputDirectory>
<resources>
<resource>
<directory>/${project.basedir}/../${hbase-client.dir}</directory>
<filtering>true</filtering> <!-- filtering replaces ${project.version} with literal -->
<filtering>true</filtering>
<!-- filtering replaces ${project.version} with literal -->
<includes>
<include>pom.xml</include>
</includes>
</resource>
</resource>
</resources>
</configuration>
</execution>
<execution>
<id>hbase-shaded-client__copy-src-to-build-archetype-subdir</id>
<phase>generate-resources</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<phase>generate-resources</phase>
<configuration>
<outputDirectory>/${project.basedir}/../${hbase-shaded-client.dir}/${build.archetype.subdir}</outputDirectory>
<resources>
@ -113,20 +113,21 @@
</execution>
<execution>
<id>hbase-shaded-client__copy-pom-to-temp-for-xslt-processing</id>
<phase>generate-resources</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<phase>generate-resources</phase>
<configuration>
<outputDirectory>/${project.basedir}/../${hbase-shaded-client.dir}/${temp.exemplar.subdir}</outputDirectory>
<resources>
<resource>
<directory>/${project.basedir}/../${hbase-shaded-client.dir}</directory>
<filtering>true</filtering> <!-- filtering replaces ${project.version} with literal -->
<filtering>true</filtering>
<!-- filtering replaces ${project.version} with literal -->
<includes>
<include>pom.xml</include>
</includes>
</resource>
</resource>
</resources>
</configuration>
</execution>
@ -137,10 +138,10 @@
using xml-maven-plugin for xslt transformation, below. -->
<execution>
<id>hbase-client-ARCHETYPE__copy-pom-to-temp-for-xslt-processing</id>
<phase>prepare-package</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<phase>prepare-package</phase>
<configuration>
<outputDirectory>/${project.basedir}/../${hbase-client.dir}/${temp.archetype.subdir}</outputDirectory>
<resources>
@ -149,16 +150,16 @@
<includes>
<include>pom.xml</include>
</includes>
</resource>
</resource>
</resources>
</configuration>
</execution>
<execution>
<id>hbase-shaded-client-ARCHETYPE__copy-pom-to-temp-for-xslt-processing</id>
<phase>prepare-package</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<phase>prepare-package</phase>
<configuration>
<outputDirectory>/${project.basedir}/../${hbase-shaded-client.dir}/${temp.archetype.subdir}</outputDirectory>
<resources>
@ -167,7 +168,7 @@
<includes>
<include>pom.xml</include>
</includes>
</resource>
</resource>
</resources>
</configuration>
</execution>
@ -182,10 +183,10 @@
<!-- xml-maven-plugin modifies each exemplar project's pom.xml file to convert to standalone project. -->
<execution>
<id>modify-exemplar-pom-files-via-xslt</id>
<phase>process-resources</phase>
<goals>
<goal>transform</goal>
</goals>
<phase>process-resources</phase>
<configuration>
<transformationSets>
<transformationSet>
@ -212,10 +213,10 @@
prevent warnings when project is generated from archetype. -->
<execution>
<id>modify-archetype-pom-files-via-xslt</id>
<phase>package</phase>
<goals>
<goal>transform</goal>
</goals>
<phase>package</phase>
<configuration>
<transformationSets>
<transformationSet>
@ -242,32 +243,32 @@
</plugin>
<plugin>
<artifactId>maven-antrun-plugin</artifactId>
<artifactId>maven-antrun-plugin</artifactId>
<executions>
<!-- exec-maven-plugin executes chmod to make scripts executable -->
<execution>
<id>make-scripts-executable</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<phase>process-resources</phase>
<configuration>
<chmod file="${project.basedir}/createArchetypes.sh" perm="+x" />
<chmod file="${project.basedir}/installArchetypes.sh" perm="+x" />
<chmod file="${project.basedir}/createArchetypes.sh" perm="+x"/>
<chmod file="${project.basedir}/installArchetypes.sh" perm="+x"/>
</configuration>
</execution>
<!-- exec-maven-plugin executes script which invokes 'archetype:create-from-project'
to derive archetypes from exemplar projects. -->
<execution>
<id>run-createArchetypes-script</id>
<phase>compile</phase>
<goals>
<goal>run</goal>
</goals>
<phase>compile</phase>
<configuration>
<exec executable="${shell-executable}" dir="${project.basedir}" failonerror="true">
<arg line="./createArchetypes.sh"/>
</exec>
<exec dir="${project.basedir}" executable="${shell-executable}" failonerror="true">
<arg line="./createArchetypes.sh"/>
</exec>
</configuration>
</execution>
<!-- exec-maven-plugin executes script which invokes 'install' to install each
@ -277,14 +278,14 @@
which does test generation of a project based on the archetype. -->
<execution>
<id>run-installArchetypes-script</id>
<phase>install</phase>
<goals>
<goal>run</goal>
</goals>
<phase>install</phase>
<configuration>
<exec executable="${shell-executable}" dir="${project.basedir}" failonerror="true">
<arg line="./installArchetypes.sh"/>
</exec>
<exec dir="${project.basedir}" executable="${shell-executable}" failonerror="true">
<arg line="./installArchetypes.sh"/>
</exec>
</configuration>
</execution>
</executions>

View File

@ -1,8 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0"
xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation=
"https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
* Licensed to the Apache Software Foundation (ASF) under one
@ -24,8 +21,8 @@
-->
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase-archetypes</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-archetypes</artifactId>
<version>3.0.0-alpha-3-SNAPSHOT</version>
<relativePath>..</relativePath>
</parent>

View File

@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -38,19 +37,17 @@ import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;
/**
* Successful running of this application requires access to an active instance
* of HBase. For install instructions for a standalone instance of HBase, please
* refer to https://hbase.apache.org/book.html#quickstart
* Successful running of this application requires access to an active instance of HBase. For
* install instructions for a standalone instance of HBase, please refer to
* https://hbase.apache.org/book.html#quickstart
*/
public final class HelloHBase {
protected static final String MY_NAMESPACE_NAME = "myTestNamespace";
static final TableName MY_TABLE_NAME = TableName.valueOf("myTestTable");
static final byte[] MY_COLUMN_FAMILY_NAME = Bytes.toBytes("cf");
static final byte[] MY_FIRST_COLUMN_QUALIFIER
= Bytes.toBytes("myFirstColumn");
static final byte[] MY_SECOND_COLUMN_QUALIFIER
= Bytes.toBytes("mySecondColumn");
static final byte[] MY_FIRST_COLUMN_QUALIFIER = Bytes.toBytes("myFirstColumn");
static final byte[] MY_SECOND_COLUMN_QUALIFIER = Bytes.toBytes("mySecondColumn");
static final byte[] MY_ROW_ID = Bytes.toBytes("rowId01");
// Private constructor included here to avoid checkstyle warnings
@ -61,20 +58,20 @@ public final class HelloHBase {
final boolean deleteAllAtEOJ = true;
/**
* ConnectionFactory#createConnection() automatically looks for
* hbase-site.xml (HBase configuration parameters) on the system's
* CLASSPATH, to enable creation of Connection to HBase via ZooKeeper.
* ConnectionFactory#createConnection() automatically looks for hbase-site.xml (HBase
* configuration parameters) on the system's CLASSPATH, to enable creation of Connection to
* HBase via ZooKeeper.
*/
try (Connection connection = ConnectionFactory.createConnection();
Admin admin = connection.getAdmin()) {
Admin admin = connection.getAdmin()) {
admin.getClusterMetrics(); // assure connection successfully established
System.out.println("\n*** Hello HBase! -- Connection has been "
+ "established via ZooKeeper!!\n");
System.out
.println("\n*** Hello HBase! -- Connection has been " + "established via ZooKeeper!!\n");
createNamespaceAndTable(admin);
System.out.println("Getting a Table object for [" + MY_TABLE_NAME
+ "] with which to perform CRUD operations in HBase.");
+ "] with which to perform CRUD operations in HBase.");
try (Table table = connection.getTable(MY_TABLE_NAME)) {
putRowToTable(table);
@ -92,9 +89,8 @@ public final class HelloHBase {
}
/**
* Invokes Admin#createNamespace and Admin#createTable to create a namespace
* with a table that has one column-family.
*
* Invokes Admin#createNamespace and Admin#createTable to create a namespace with a table that has
* one column-family.
* @param admin Standard Admin object
* @throws IOException If IO problem encountered
*/
@ -103,48 +99,38 @@ public final class HelloHBase {
if (!namespaceExists(admin, MY_NAMESPACE_NAME)) {
System.out.println("Creating Namespace [" + MY_NAMESPACE_NAME + "].");
admin.createNamespace(NamespaceDescriptor
.create(MY_NAMESPACE_NAME).build());
admin.createNamespace(NamespaceDescriptor.create(MY_NAMESPACE_NAME).build());
}
if (!admin.tableExists(MY_TABLE_NAME)) {
System.out.println("Creating Table [" + MY_TABLE_NAME.getNameAsString()
+ "], with one Column Family ["
+ Bytes.toString(MY_COLUMN_FAMILY_NAME) + "].");
+ "], with one Column Family [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + "].");
TableDescriptor desc = TableDescriptorBuilder.newBuilder(MY_TABLE_NAME)
.setColumnFamily(ColumnFamilyDescriptorBuilder.of(MY_COLUMN_FAMILY_NAME))
.build();
.setColumnFamily(ColumnFamilyDescriptorBuilder.of(MY_COLUMN_FAMILY_NAME)).build();
admin.createTable(desc);
}
}
/**
* Invokes Table#put to store a row (with two new columns created 'on the
* fly') into the table.
*
* Invokes Table#put to store a row (with two new columns created 'on the fly') into the table.
* @param table Standard Table object (used for CRUD operations).
* @throws IOException If IO problem encountered
*/
static void putRowToTable(final Table table) throws IOException {
table.put(new Put(MY_ROW_ID).addColumn(MY_COLUMN_FAMILY_NAME,
MY_FIRST_COLUMN_QUALIFIER,
Bytes.toBytes("Hello")).addColumn(MY_COLUMN_FAMILY_NAME,
MY_SECOND_COLUMN_QUALIFIER,
Bytes.toBytes("World!")));
table.put(new Put(MY_ROW_ID)
.addColumn(MY_COLUMN_FAMILY_NAME, MY_FIRST_COLUMN_QUALIFIER, Bytes.toBytes("Hello"))
.addColumn(MY_COLUMN_FAMILY_NAME, MY_SECOND_COLUMN_QUALIFIER, Bytes.toBytes("World!")));
System.out.println("Row [" + Bytes.toString(MY_ROW_ID)
+ "] was put into Table ["
+ table.getName().getNameAsString() + "] in HBase;\n"
+ " the row's two columns (created 'on the fly') are: ["
+ Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":"
+ Bytes.toString(MY_FIRST_COLUMN_QUALIFIER)
+ "] and [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":"
+ Bytes.toString(MY_SECOND_COLUMN_QUALIFIER) + "]");
System.out.println("Row [" + Bytes.toString(MY_ROW_ID) + "] was put into Table ["
+ table.getName().getNameAsString() + "] in HBase;\n"
+ " the row's two columns (created 'on the fly') are: ["
+ Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":" + Bytes.toString(MY_FIRST_COLUMN_QUALIFIER)
+ "] and [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":"
+ Bytes.toString(MY_SECOND_COLUMN_QUALIFIER) + "]");
}
/**
* Invokes Table#get and prints out the contents of the retrieved row.
*
* @param table Standard Table object
* @throws IOException If IO problem encountered
*/
@ -152,38 +138,32 @@ public final class HelloHBase {
Result row = table.get(new Get(MY_ROW_ID));
System.out.println("Row [" + Bytes.toString(row.getRow())
+ "] was retrieved from Table ["
+ table.getName().getNameAsString()
+ "] in HBase, with the following content:");
System.out.println("Row [" + Bytes.toString(row.getRow()) + "] was retrieved from Table ["
+ table.getName().getNameAsString() + "] in HBase, with the following content:");
for (Entry<byte[], NavigableMap<byte[], byte[]>> colFamilyEntry
: row.getNoVersionMap().entrySet()) {
for (Entry<byte[], NavigableMap<byte[], byte[]>> colFamilyEntry : row.getNoVersionMap()
.entrySet()) {
String columnFamilyName = Bytes.toString(colFamilyEntry.getKey());
System.out.println(" Columns in Column Family [" + columnFamilyName
+ "]:");
System.out.println(" Columns in Column Family [" + columnFamilyName + "]:");
for (Entry<byte[], byte[]> columnNameAndValueMap
: colFamilyEntry.getValue().entrySet()) {
for (Entry<byte[], byte[]> columnNameAndValueMap : colFamilyEntry.getValue().entrySet()) {
System.out.println(" Value of Column [" + columnFamilyName + ":"
+ Bytes.toString(columnNameAndValueMap.getKey()) + "] == "
+ Bytes.toString(columnNameAndValueMap.getValue()));
+ Bytes.toString(columnNameAndValueMap.getKey()) + "] == "
+ Bytes.toString(columnNameAndValueMap.getValue()));
}
}
}
/**
* Checks to see whether a namespace exists.
*
* @param admin Standard Admin object
* @param admin Standard Admin object
* @param namespaceName Name of namespace
* @return true If namespace exists
* @throws IOException If IO problem encountered
*/
static boolean namespaceExists(final Admin admin, final String namespaceName)
throws IOException {
static boolean namespaceExists(final Admin admin, final String namespaceName) throws IOException {
try {
admin.getNamespaceDescriptor(namespaceName);
} catch (NamespaceNotFoundException e) {
@ -194,28 +174,24 @@ public final class HelloHBase {
/**
* Invokes Table#delete to delete test data (i.e. the row)
*
* @param table Standard Table object
* @throws IOException If IO problem is encountered
*/
static void deleteRow(final Table table) throws IOException {
System.out.println("Deleting row [" + Bytes.toString(MY_ROW_ID)
+ "] from Table ["
+ table.getName().getNameAsString() + "].");
System.out.println("Deleting row [" + Bytes.toString(MY_ROW_ID) + "] from Table ["
+ table.getName().getNameAsString() + "].");
table.delete(new Delete(MY_ROW_ID));
}
/**
* Invokes Admin#disableTable, Admin#deleteTable, and Admin#deleteNamespace to
* disable/delete Table and delete Namespace.
*
* Invokes Admin#disableTable, Admin#deleteTable, and Admin#deleteNamespace to disable/delete
* Table and delete Namespace.
* @param admin Standard Admin object
* @throws IOException If IO problem is encountered
*/
static void deleteNamespaceAndTable(final Admin admin) throws IOException {
if (admin.tableExists(MY_TABLE_NAME)) {
System.out.println("Disabling/deleting Table ["
+ MY_TABLE_NAME.getNameAsString() + "].");
System.out.println("Disabling/deleting Table [" + MY_TABLE_NAME.getNameAsString() + "].");
admin.disableTable(MY_TABLE_NAME); // Disable a table before deleting it.
admin.deleteTable(MY_TABLE_NAME);
}

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -44,10 +44,9 @@ public class TestHelloHBase {
@ClassRule
public static final HBaseClassTestRule CLASS_RULE =
HBaseClassTestRule.forClass(TestHelloHBase.class);
HBaseClassTestRule.forClass(TestHelloHBase.class);
private static final HBaseTestingUtil TEST_UTIL
= new HBaseTestingUtil();
private static final HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil();
@BeforeClass
public static void beforeClass() throws Exception {
@ -67,13 +66,11 @@ public class TestHelloHBase {
Admin admin = TEST_UTIL.getAdmin();
exists = HelloHBase.namespaceExists(admin, NONEXISTENT_NAMESPACE);
assertEquals("#namespaceExists failed: found nonexistent namespace.",
false, exists);
assertEquals("#namespaceExists failed: found nonexistent namespace.", false, exists);
admin.createNamespace(NamespaceDescriptor.create(EXISTING_NAMESPACE).build());
exists = HelloHBase.namespaceExists(admin, EXISTING_NAMESPACE);
assertEquals("#namespaceExists failed: did NOT find existing namespace.",
true, exists);
assertEquals("#namespaceExists failed: did NOT find existing namespace.", true, exists);
admin.deleteNamespace(EXISTING_NAMESPACE);
}
@ -82,14 +79,11 @@ public class TestHelloHBase {
Admin admin = TEST_UTIL.getAdmin();
HelloHBase.createNamespaceAndTable(admin);
boolean namespaceExists
= HelloHBase.namespaceExists(admin, HelloHBase.MY_NAMESPACE_NAME);
assertEquals("#createNamespaceAndTable failed to create namespace.",
true, namespaceExists);
boolean namespaceExists = HelloHBase.namespaceExists(admin, HelloHBase.MY_NAMESPACE_NAME);
assertEquals("#createNamespaceAndTable failed to create namespace.", true, namespaceExists);
boolean tableExists = admin.tableExists(HelloHBase.MY_TABLE_NAME);
assertEquals("#createNamespaceAndTable failed to create table.",
true, tableExists);
assertEquals("#createNamespaceAndTable failed to create table.", true, tableExists);
admin.disableTable(HelloHBase.MY_TABLE_NAME);
admin.deleteTable(HelloHBase.MY_TABLE_NAME);
@ -100,8 +94,7 @@ public class TestHelloHBase {
public void testPutRowToTable() throws IOException {
Admin admin = TEST_UTIL.getAdmin();
admin.createNamespace(NamespaceDescriptor.create(HelloHBase.MY_NAMESPACE_NAME).build());
Table table
= TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME);
Table table = TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME);
HelloHBase.putRowToTable(table);
Result row = table.get(new Get(HelloHBase.MY_ROW_ID));
@ -115,13 +108,10 @@ public class TestHelloHBase {
public void testDeleteRow() throws IOException {
Admin admin = TEST_UTIL.getAdmin();
admin.createNamespace(NamespaceDescriptor.create(HelloHBase.MY_NAMESPACE_NAME).build());
Table table
= TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME);
Table table = TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME);
table.put(new Put(HelloHBase.MY_ROW_ID).
addColumn(HelloHBase.MY_COLUMN_FAMILY_NAME,
HelloHBase.MY_FIRST_COLUMN_QUALIFIER,
Bytes.toBytes("xyz")));
table.put(new Put(HelloHBase.MY_ROW_ID).addColumn(HelloHBase.MY_COLUMN_FAMILY_NAME,
HelloHBase.MY_FIRST_COLUMN_QUALIFIER, Bytes.toBytes("xyz")));
HelloHBase.deleteRow(table);
Result row = table.get(new Get(HelloHBase.MY_ROW_ID));
assertEquals("#deleteRow failed to delete row.", true, row.isEmpty());

View File

@ -1,8 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0"
xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation=
"https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
* Licensed to the Apache Software Foundation (ASF) under one
@ -24,8 +21,8 @@
-->
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase-archetypes</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-archetypes</artifactId>
<version>3.0.0-alpha-3-SNAPSHOT</version>
<relativePath>..</relativePath>
</parent>
@ -44,16 +41,16 @@
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-testing-util</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>javax.xml.bind</groupId>
<artifactId>jaxb-api</artifactId>
</exclusion>
<exclusion>
<groupId>javax.ws.rs</groupId>
<artifactId>jsr311-api</artifactId>
</exclusion>
</exclusions>
<exclusions>
<exclusion>
<groupId>javax.xml.bind</groupId>
<artifactId>jaxb-api</artifactId>
</exclusion>
<exclusion>
<groupId>javax.ws.rs</groupId>
<artifactId>jsr311-api</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>

View File

@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -37,19 +36,17 @@ import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;
/**
* Successful running of this application requires access to an active instance
* of HBase. For install instructions for a standalone instance of HBase, please
* refer to https://hbase.apache.org/book.html#quickstart
* Successful running of this application requires access to an active instance of HBase. For
* install instructions for a standalone instance of HBase, please refer to
* https://hbase.apache.org/book.html#quickstart
*/
public final class HelloHBase {
protected static final String MY_NAMESPACE_NAME = "myTestNamespace";
static final TableName MY_TABLE_NAME = TableName.valueOf("myTestTable");
static final byte[] MY_COLUMN_FAMILY_NAME = Bytes.toBytes("cf");
static final byte[] MY_FIRST_COLUMN_QUALIFIER
= Bytes.toBytes("myFirstColumn");
static final byte[] MY_SECOND_COLUMN_QUALIFIER
= Bytes.toBytes("mySecondColumn");
static final byte[] MY_FIRST_COLUMN_QUALIFIER = Bytes.toBytes("myFirstColumn");
static final byte[] MY_SECOND_COLUMN_QUALIFIER = Bytes.toBytes("mySecondColumn");
static final byte[] MY_ROW_ID = Bytes.toBytes("rowId01");
// Private constructor included here to avoid checkstyle warnings
@ -60,20 +57,20 @@ public final class HelloHBase {
final boolean deleteAllAtEOJ = true;
/**
* ConnectionFactory#createConnection() automatically looks for
* hbase-site.xml (HBase configuration parameters) on the system's
* CLASSPATH, to enable creation of Connection to HBase via ZooKeeper.
* ConnectionFactory#createConnection() automatically looks for hbase-site.xml (HBase
* configuration parameters) on the system's CLASSPATH, to enable creation of Connection to
* HBase via ZooKeeper.
*/
try (Connection connection = ConnectionFactory.createConnection();
Admin admin = connection.getAdmin()) {
Admin admin = connection.getAdmin()) {
admin.getClusterMetrics(); // assure connection successfully established
System.out.println("\n*** Hello HBase! -- Connection has been "
+ "established via ZooKeeper!!\n");
System.out
.println("\n*** Hello HBase! -- Connection has been " + "established via ZooKeeper!!\n");
createNamespaceAndTable(admin);
System.out.println("Getting a Table object for [" + MY_TABLE_NAME
+ "] with which to perform CRUD operations in HBase.");
+ "] with which to perform CRUD operations in HBase.");
try (Table table = connection.getTable(MY_TABLE_NAME)) {
putRowToTable(table);
@ -91,9 +88,8 @@ public final class HelloHBase {
}
/**
* Invokes Admin#createNamespace and Admin#createTable to create a namespace
* with a table that has one column-family.
*
* Invokes Admin#createNamespace and Admin#createTable to create a namespace with a table that has
* one column-family.
* @param admin Standard Admin object
* @throws IOException If IO problem encountered
*/
@ -102,13 +98,11 @@ public final class HelloHBase {
if (!namespaceExists(admin, MY_NAMESPACE_NAME)) {
System.out.println("Creating Namespace [" + MY_NAMESPACE_NAME + "].");
admin.createNamespace(NamespaceDescriptor
.create(MY_NAMESPACE_NAME).build());
admin.createNamespace(NamespaceDescriptor.create(MY_NAMESPACE_NAME).build());
}
if (!admin.tableExists(MY_TABLE_NAME)) {
System.out.println("Creating Table [" + MY_TABLE_NAME.getNameAsString()
+ "], with one Column Family ["
+ Bytes.toString(MY_COLUMN_FAMILY_NAME) + "].");
+ "], with one Column Family [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + "].");
admin.createTable(TableDescriptorBuilder.newBuilder(MY_TABLE_NAME)
.setColumnFamily(ColumnFamilyDescriptorBuilder.of(MY_COLUMN_FAMILY_NAME)).build());
@ -116,33 +110,26 @@ public final class HelloHBase {
}
/**
* Invokes Table#put to store a row (with two new columns created 'on the
* fly') into the table.
*
* Invokes Table#put to store a row (with two new columns created 'on the fly') into the table.
* @param table Standard Table object (used for CRUD operations).
* @throws IOException If IO problem encountered
*/
static void putRowToTable(final Table table) throws IOException {
table.put(new Put(MY_ROW_ID).addColumn(MY_COLUMN_FAMILY_NAME,
MY_FIRST_COLUMN_QUALIFIER,
Bytes.toBytes("Hello")).addColumn(MY_COLUMN_FAMILY_NAME,
MY_SECOND_COLUMN_QUALIFIER,
Bytes.toBytes("World!")));
table.put(new Put(MY_ROW_ID)
.addColumn(MY_COLUMN_FAMILY_NAME, MY_FIRST_COLUMN_QUALIFIER, Bytes.toBytes("Hello"))
.addColumn(MY_COLUMN_FAMILY_NAME, MY_SECOND_COLUMN_QUALIFIER, Bytes.toBytes("World!")));
System.out.println("Row [" + Bytes.toString(MY_ROW_ID)
+ "] was put into Table ["
+ table.getName().getNameAsString() + "] in HBase;\n"
+ " the row's two columns (created 'on the fly') are: ["
+ Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":"
+ Bytes.toString(MY_FIRST_COLUMN_QUALIFIER)
+ "] and [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":"
+ Bytes.toString(MY_SECOND_COLUMN_QUALIFIER) + "]");
System.out.println("Row [" + Bytes.toString(MY_ROW_ID) + "] was put into Table ["
+ table.getName().getNameAsString() + "] in HBase;\n"
+ " the row's two columns (created 'on the fly') are: ["
+ Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":" + Bytes.toString(MY_FIRST_COLUMN_QUALIFIER)
+ "] and [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":"
+ Bytes.toString(MY_SECOND_COLUMN_QUALIFIER) + "]");
}
/**
* Invokes Table#get and prints out the contents of the retrieved row.
*
* @param table Standard Table object
* @throws IOException If IO problem encountered
*/
@ -150,38 +137,32 @@ public final class HelloHBase {
Result row = table.get(new Get(MY_ROW_ID));
System.out.println("Row [" + Bytes.toString(row.getRow())
+ "] was retrieved from Table ["
+ table.getName().getNameAsString()
+ "] in HBase, with the following content:");
System.out.println("Row [" + Bytes.toString(row.getRow()) + "] was retrieved from Table ["
+ table.getName().getNameAsString() + "] in HBase, with the following content:");
for (Entry<byte[], NavigableMap<byte[], byte[]>> colFamilyEntry
: row.getNoVersionMap().entrySet()) {
for (Entry<byte[], NavigableMap<byte[], byte[]>> colFamilyEntry : row.getNoVersionMap()
.entrySet()) {
String columnFamilyName = Bytes.toString(colFamilyEntry.getKey());
System.out.println(" Columns in Column Family [" + columnFamilyName
+ "]:");
System.out.println(" Columns in Column Family [" + columnFamilyName + "]:");
for (Entry<byte[], byte[]> columnNameAndValueMap
: colFamilyEntry.getValue().entrySet()) {
for (Entry<byte[], byte[]> columnNameAndValueMap : colFamilyEntry.getValue().entrySet()) {
System.out.println(" Value of Column [" + columnFamilyName + ":"
+ Bytes.toString(columnNameAndValueMap.getKey()) + "] == "
+ Bytes.toString(columnNameAndValueMap.getValue()));
+ Bytes.toString(columnNameAndValueMap.getKey()) + "] == "
+ Bytes.toString(columnNameAndValueMap.getValue()));
}
}
}
/**
* Checks to see whether a namespace exists.
*
* @param admin Standard Admin object
* @param admin Standard Admin object
* @param namespaceName Name of namespace
* @return true If namespace exists
* @throws IOException If IO problem encountered
*/
static boolean namespaceExists(final Admin admin, final String namespaceName)
throws IOException {
static boolean namespaceExists(final Admin admin, final String namespaceName) throws IOException {
try {
admin.getNamespaceDescriptor(namespaceName);
} catch (NamespaceNotFoundException e) {
@ -192,28 +173,24 @@ public final class HelloHBase {
/**
* Invokes Table#delete to delete test data (i.e. the row)
*
* @param table Standard Table object
* @throws IOException If IO problem is encountered
*/
static void deleteRow(final Table table) throws IOException {
System.out.println("Deleting row [" + Bytes.toString(MY_ROW_ID)
+ "] from Table ["
+ table.getName().getNameAsString() + "].");
System.out.println("Deleting row [" + Bytes.toString(MY_ROW_ID) + "] from Table ["
+ table.getName().getNameAsString() + "].");
table.delete(new Delete(MY_ROW_ID));
}
/**
* Invokes Admin#disableTable, Admin#deleteTable, and Admin#deleteNamespace to
* disable/delete Table and delete Namespace.
*
* Invokes Admin#disableTable, Admin#deleteTable, and Admin#deleteNamespace to disable/delete
* Table and delete Namespace.
* @param admin Standard Admin object
* @throws IOException If IO problem is encountered
*/
static void deleteNamespaceAndTable(final Admin admin) throws IOException {
if (admin.tableExists(MY_TABLE_NAME)) {
System.out.println("Disabling/deleting Table ["
+ MY_TABLE_NAME.getNameAsString() + "].");
System.out.println("Disabling/deleting Table [" + MY_TABLE_NAME.getNameAsString() + "].");
admin.disableTable(MY_TABLE_NAME); // Disable a table before deleting it.
admin.deleteTable(MY_TABLE_NAME);
}

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -44,10 +44,9 @@ public class TestHelloHBase {
@ClassRule
public static final HBaseClassTestRule CLASS_RULE =
HBaseClassTestRule.forClass(TestHelloHBase.class);
HBaseClassTestRule.forClass(TestHelloHBase.class);
private static final HBaseTestingUtil TEST_UTIL
= new HBaseTestingUtil();
private static final HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil();
@BeforeClass
public static void beforeClass() throws Exception {
@ -67,13 +66,11 @@ public class TestHelloHBase {
Admin admin = TEST_UTIL.getAdmin();
exists = HelloHBase.namespaceExists(admin, NONEXISTENT_NAMESPACE);
assertEquals("#namespaceExists failed: found nonexistent namespace.",
false, exists);
assertEquals("#namespaceExists failed: found nonexistent namespace.", false, exists);
admin.createNamespace(NamespaceDescriptor.create(EXISTING_NAMESPACE).build());
exists = HelloHBase.namespaceExists(admin, EXISTING_NAMESPACE);
assertEquals("#namespaceExists failed: did NOT find existing namespace.",
true, exists);
assertEquals("#namespaceExists failed: did NOT find existing namespace.", true, exists);
admin.deleteNamespace(EXISTING_NAMESPACE);
}
@ -82,14 +79,11 @@ public class TestHelloHBase {
Admin admin = TEST_UTIL.getAdmin();
HelloHBase.createNamespaceAndTable(admin);
boolean namespaceExists
= HelloHBase.namespaceExists(admin, HelloHBase.MY_NAMESPACE_NAME);
assertEquals("#createNamespaceAndTable failed to create namespace.",
true, namespaceExists);
boolean namespaceExists = HelloHBase.namespaceExists(admin, HelloHBase.MY_NAMESPACE_NAME);
assertEquals("#createNamespaceAndTable failed to create namespace.", true, namespaceExists);
boolean tableExists = admin.tableExists(HelloHBase.MY_TABLE_NAME);
assertEquals("#createNamespaceAndTable failed to create table.",
true, tableExists);
assertEquals("#createNamespaceAndTable failed to create table.", true, tableExists);
admin.disableTable(HelloHBase.MY_TABLE_NAME);
admin.deleteTable(HelloHBase.MY_TABLE_NAME);
@ -100,8 +94,7 @@ public class TestHelloHBase {
public void testPutRowToTable() throws IOException {
Admin admin = TEST_UTIL.getAdmin();
admin.createNamespace(NamespaceDescriptor.create(HelloHBase.MY_NAMESPACE_NAME).build());
Table table
= TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME);
Table table = TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME);
HelloHBase.putRowToTable(table);
Result row = table.get(new Get(HelloHBase.MY_ROW_ID));
@ -115,13 +108,10 @@ public class TestHelloHBase {
public void testDeleteRow() throws IOException {
Admin admin = TEST_UTIL.getAdmin();
admin.createNamespace(NamespaceDescriptor.create(HelloHBase.MY_NAMESPACE_NAME).build());
Table table
= TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME);
Table table = TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME);
table.put(new Put(HelloHBase.MY_ROW_ID).
addColumn(HelloHBase.MY_COLUMN_FAMILY_NAME,
HelloHBase.MY_FIRST_COLUMN_QUALIFIER,
Bytes.toBytes("xyz")));
table.put(new Put(HelloHBase.MY_ROW_ID).addColumn(HelloHBase.MY_COLUMN_FAMILY_NAME,
HelloHBase.MY_FIRST_COLUMN_QUALIFIER, Bytes.toBytes("xyz")));
HelloHBase.deleteRow(table);
Result row = table.get(new Get(HelloHBase.MY_ROW_ID));
assertEquals("#deleteRow failed to delete row.", true, row.isEmpty());
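The assertion edits above are pure reflows; the checks themselves are unchanged. As a side note (not part of this change), the assertEquals(message, true/false, value) pattern can be written with assertTrue, which reads a little more directly; a sketch of the same deleteRow verification:

import static org.junit.Assert.assertTrue;

import java.io.IOException;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;

final class DeleteRowCheckSketch {
  // Same verification as the end of testDeleteRow, phrased with assertTrue.
  static void assertRowDeleted(Table table) throws IOException {
    Result row = table.get(new Get(HelloHBase.MY_ROW_ID));
    assertTrue("#deleteRow failed to delete row.", row.isEmpty());
  }
}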

@ -1,6 +1,5 @@
<?xml version="1.0"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
* Licensed to the Apache Software Foundation (ASF) under one
@ -22,8 +21,8 @@
-->
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase-build-configuration</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-build-configuration</artifactId>
<version>3.0.0-alpha-3-SNAPSHOT</version>
<relativePath>../hbase-build-configuration</relativePath>
</parent>
@ -68,10 +67,10 @@
<artifactId>spotbugs-maven-plugin</artifactId>
<executions>
<execution>
<inherited>false</inherited>
<goals>
<goal>spotbugs</goal>
</goals>
<inherited>false</inherited>
<configuration>
<excludeFilterFile>${project.basedir}/../dev-support/spotbugs-exclude.xml</excludeFilterFile>
</configuration>

@ -1,4 +1,4 @@
<?xml version="1.0"?>
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
@ -21,160 +21,18 @@
-->
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase-build-configuration</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-build-configuration</artifactId>
<version>3.0.0-alpha-3-SNAPSHOT</version>
<relativePath>../hbase-build-configuration</relativePath>
</parent>
<artifactId>hbase-assembly</artifactId>
<name>Apache HBase - Assembly</name>
<description>
Module that does project assembly and that is all that it does.
</description>
<packaging>pom</packaging>
<name>Apache HBase - Assembly</name>
<description>Module that does project assembly and that is all that it does.</description>
<properties>
<license.bundles.dependencies>true</license.bundles.dependencies>
</properties>
<build>
<plugins>
<!-- licensing info from our dependencies -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-remote-resources-plugin</artifactId>
<executions>
<execution>
<id>aggregate-licenses</id>
<goals>
<goal>process</goal>
</goals>
<configuration>
<properties>
<copyright-end-year>${build.year}</copyright-end-year>
<debug-print-included-work-info>${license.debug.print.included}</debug-print-included-work-info>
<bundled-dependencies>${license.bundles.dependencies}</bundled-dependencies>
<bundled-jquery>${license.bundles.jquery}</bundled-jquery>
<bundled-vega>${license.bundles.vega}</bundled-vega>
<bundled-logo>${license.bundles.logo}</bundled-logo>
<bundled-bootstrap>${license.bundles.bootstrap}</bundled-bootstrap>
</properties>
<resourceBundles>
<resourceBundle>${project.groupId}:hbase-resource-bundle:${project.version}</resourceBundle>
</resourceBundles>
<supplementalModelArtifacts>
<supplementalModelArtifact>${project.groupId}:hbase-resource-bundle:${project.version}</supplementalModelArtifact>
</supplementalModelArtifacts>
<supplementalModels>
<supplementalModel>supplemental-models.xml</supplementalModel>
</supplementalModels>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<!--Else will use hbase-assembly as final name.-->
<finalName>hbase-${project.version}</finalName>
<skipAssembly>false</skipAssembly>
<appendAssemblyId>true</appendAssemblyId>
<tarLongFileMode>posix</tarLongFileMode>
<descriptors>
<descriptor>${assembly.file}</descriptor>
<descriptor>src/main/assembly/client.xml</descriptor>
</descriptors>
</configuration>
</plugin>
<plugin>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<!-- generates the file that will be used by the bin/hbase script in the dev env -->
<id>create-hbase-generated-classpath</id>
<phase>test</phase>
<goals>
<goal>build-classpath</goal>
</goals>
<configuration>
<outputFile>${project.parent.basedir}/target/cached_classpath.txt</outputFile>
<excludeArtifactIds>jline,jruby-complete,hbase-shaded-client,hbase-shaded-client-byo-hadoop,hbase-shaded-mapreduce</excludeArtifactIds>
</configuration>
</execution>
<execution>
<!-- generates the file that will be used by the bin/hbase zkcli script in the dev env -->
<id>create-hbase-generated-classpath-jline</id>
<phase>test</phase>
<goals>
<goal>build-classpath</goal>
</goals>
<configuration>
<outputFile>${project.parent.basedir}/target/cached_classpath_jline.txt</outputFile>
<includeArtifactIds>jline</includeArtifactIds>
</configuration>
</execution>
<execution>
<!-- generates the file that will be used by the bin/hbase shell script in the dev env -->
<id>create-hbase-generated-classpath-jruby</id>
<phase>test</phase>
<goals>
<goal>build-classpath</goal>
</goals>
<configuration>
<outputFile>${project.parent.basedir}/target/cached_classpath_jruby.txt</outputFile>
<includeArtifactIds>jruby-complete</includeArtifactIds>
</configuration>
</execution>
<!--
Build an aggregation of our templated NOTICE file and the NOTICE files in our dependencies.
If MASSEMBLY-382 is fixed we could do this in the assembly
Currently relies on env, bash, find, and cat.
-->
<execution>
<!-- put all of the NOTICE files out of our dependencies -->
<id>unpack-dependency-notices</id>
<phase>prepare-package</phase>
<goals>
<goal>unpack-dependencies</goal>
</goals>
<configuration>
<excludeTypes>pom</excludeTypes>
<useSubDirectoryPerArtifact>true</useSubDirectoryPerArtifact>
<includes>**\/NOTICE,**\/NOTICE.txt</includes>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>${exec.maven.version}</version>
<executions>
<execution>
<id>concat-NOTICE-files</id>
<phase>package</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>env</executable>
<arguments>
<argument>bash</argument>
<argument>-c</argument>
<argument>cat maven-shared-archive-resources/META-INF/NOTICE \
`find ${project.build.directory}/dependency -iname NOTICE -or -iname NOTICE.txt`
</argument>
</arguments>
<outputFile>${project.build.directory}/NOTICE.aggregate</outputFile>
<workingDirectory>${project.build.directory}</workingDirectory>
</configuration>
</execution>
</executions>
</plugin>
<!-- /end building aggregation of NOTICE files -->
</plugins>
</build>
<dependencies>
<!-- client artifacts for downstream use -->
<dependency>
@ -189,7 +47,7 @@
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-shaded-mapreduce</artifactId>
</dependency>
<!-- Intra-project dependencies -->
<!-- Intra-project dependencies -->
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-it</artifactId>
@ -254,25 +112,25 @@
<artifactId>hbase-external-blockcache</artifactId>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-testing-util</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-testing-util</artifactId>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-metrics-api</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-metrics-api</artifactId>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-metrics</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-metrics</artifactId>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-protocol-shaded</artifactId>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-resource-bundle</artifactId>
<optional>true</optional>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-resource-bundle</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
@ -390,4 +248,143 @@
<scope>compile</scope>
</dependency>
</dependencies>
<build>
<plugins>
<!-- licensing info from our dependencies -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-remote-resources-plugin</artifactId>
<executions>
<execution>
<id>aggregate-licenses</id>
<goals>
<goal>process</goal>
</goals>
<configuration>
<properties>
<copyright-end-year>${build.year}</copyright-end-year>
<debug-print-included-work-info>${license.debug.print.included}</debug-print-included-work-info>
<bundled-dependencies>${license.bundles.dependencies}</bundled-dependencies>
<bundled-jquery>${license.bundles.jquery}</bundled-jquery>
<bundled-vega>${license.bundles.vega}</bundled-vega>
<bundled-logo>${license.bundles.logo}</bundled-logo>
<bundled-bootstrap>${license.bundles.bootstrap}</bundled-bootstrap>
</properties>
<resourceBundles>
<resourceBundle>${project.groupId}:hbase-resource-bundle:${project.version}</resourceBundle>
</resourceBundles>
<supplementalModelArtifacts>
<supplementalModelArtifact>${project.groupId}:hbase-resource-bundle:${project.version}</supplementalModelArtifact>
</supplementalModelArtifacts>
<supplementalModels>
<supplementalModel>supplemental-models.xml</supplementalModel>
</supplementalModels>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<!--Else will use hbase-assembly as final name.-->
<finalName>hbase-${project.version}</finalName>
<skipAssembly>false</skipAssembly>
<appendAssemblyId>true</appendAssemblyId>
<tarLongFileMode>posix</tarLongFileMode>
<descriptors>
<descriptor>${assembly.file}</descriptor>
<descriptor>src/main/assembly/client.xml</descriptor>
</descriptors>
</configuration>
</plugin>
<plugin>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<!-- generates the file that will be used by the bin/hbase script in the dev env -->
<id>create-hbase-generated-classpath</id>
<goals>
<goal>build-classpath</goal>
</goals>
<phase>test</phase>
<configuration>
<outputFile>${project.parent.basedir}/target/cached_classpath.txt</outputFile>
<excludeArtifactIds>jline,jruby-complete,hbase-shaded-client,hbase-shaded-client-byo-hadoop,hbase-shaded-mapreduce</excludeArtifactIds>
</configuration>
</execution>
<execution>
<!-- generates the file that will be used by the bin/hbase zkcli script in the dev env -->
<id>create-hbase-generated-classpath-jline</id>
<goals>
<goal>build-classpath</goal>
</goals>
<phase>test</phase>
<configuration>
<outputFile>${project.parent.basedir}/target/cached_classpath_jline.txt</outputFile>
<includeArtifactIds>jline</includeArtifactIds>
</configuration>
</execution>
<execution>
<!-- generates the file that will be used by the bin/hbase shell script in the dev env -->
<id>create-hbase-generated-classpath-jruby</id>
<goals>
<goal>build-classpath</goal>
</goals>
<phase>test</phase>
<configuration>
<outputFile>${project.parent.basedir}/target/cached_classpath_jruby.txt</outputFile>
<includeArtifactIds>jruby-complete</includeArtifactIds>
</configuration>
</execution>
<!--
Build an aggregation of our templated NOTICE file and the NOTICE files in our dependencies.
If MASSEMBLY-382 is fixed we could do this in the assembly
Currently relies on env, bash, find, and cat.
-->
<execution>
<!-- put all of the NOTICE files out of our dependencies -->
<id>unpack-dependency-notices</id>
<goals>
<goal>unpack-dependencies</goal>
</goals>
<phase>prepare-package</phase>
<configuration>
<excludeTypes>pom</excludeTypes>
<useSubDirectoryPerArtifact>true</useSubDirectoryPerArtifact>
<includes>**\/NOTICE,**\/NOTICE.txt</includes>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>${exec.maven.version}</version>
<executions>
<execution>
<id>concat-NOTICE-files</id>
<goals>
<goal>exec</goal>
</goals>
<phase>package</phase>
<configuration>
<executable>env</executable>
<arguments>
<argument>bash</argument>
<argument>-c</argument>
<argument>cat maven-shared-archive-resources/META-INF/NOTICE \
`find ${project.build.directory}/dependency -iname NOTICE -or -iname NOTICE.txt`</argument>
</arguments>
<outputFile>${project.build.directory}/NOTICE.aggregate</outputFile>
<workingDirectory>${project.build.directory}</workingDirectory>
</configuration>
</execution>
</executions>
</plugin>
<!-- /end building aggregation of NOTICE files -->
</plugins>
</build>
</project>

@ -1,6 +1,5 @@
<?xml version="1.0"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
* Licensed to the Apache Software Foundation (ASF) under one
@ -22,8 +21,8 @@
-->
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase-build-configuration</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-build-configuration</artifactId>
<version>3.0.0-alpha-3-SNAPSHOT</version>
<relativePath>../hbase-build-configuration</relativePath>
</parent>
@ -31,33 +30,6 @@
<artifactId>hbase-asyncfs</artifactId>
<name>Apache HBase - Asynchronous FileSystem</name>
<description>HBase Asynchronous FileSystem Implementation for WAL</description>
<build>
<plugins>
<!-- Make a jar and put the sources in the jar -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-source-plugin</artifactId>
</plugin>
<plugin>
<!--Make it so assembly:single does nothing in here-->
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<skipAssembly>true</skipAssembly>
</configuration>
</plugin>
<plugin>
<groupId>net.revelc.code</groupId>
<artifactId>warbucks-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
<configuration>
<failOnViolation>true</failOnViolation>
</configuration>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
@ -169,13 +141,42 @@
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<!-- Make a jar and put the sources in the jar -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-source-plugin</artifactId>
</plugin>
<plugin>
<!--Make it so assembly:single does nothing in here-->
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<skipAssembly>true</skipAssembly>
</configuration>
</plugin>
<plugin>
<groupId>net.revelc.code</groupId>
<artifactId>warbucks-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
<configuration>
<failOnViolation>true</failOnViolation>
</configuration>
</plugin>
</plugins>
</build>
<profiles>
<!-- Profiles for building against different hadoop versions -->
<profile>
<id>hadoop-3.0</id>
<activation>
<property><name>!hadoop.profile</name></property>
<property>
<name>!hadoop.profile</name>
</property>
</activation>
<dependencies>
<dependency>
@ -224,8 +225,7 @@
<artifactId>lifecycle-mapping</artifactId>
<configuration>
<lifecycleMappingMetadata>
<pluginExecutions>
</pluginExecutions>
<pluginExecutions/>
</lifecycleMappingMetadata>
</configuration>
</plugin>

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -21,10 +21,9 @@ import java.io.Closeable;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.concurrent.CompletableFuture;
import org.apache.yetus.audience.InterfaceAudience;
import org.apache.hadoop.hbase.util.CancelableProgressable;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.yetus.audience.InterfaceAudience;
/**
* Interface for asynchronous filesystem output stream.

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -47,9 +47,9 @@ public final class AsyncFSOutputHelper {
* implementation for other {@link FileSystem} which wraps around a {@link FSDataOutputStream}.
*/
public static AsyncFSOutput createOutput(FileSystem fs, Path f, boolean overwrite,
boolean createParent, short replication, long blockSize, EventLoopGroup eventLoopGroup,
Class<? extends Channel> channelClass, StreamSlowMonitor monitor)
throws IOException, CommonFSUtils.StreamLacksCapabilityException {
boolean createParent, short replication, long blockSize, EventLoopGroup eventLoopGroup,
Class<? extends Channel> channelClass, StreamSlowMonitor monitor)
throws IOException, CommonFSUtils.StreamLacksCapabilityException {
if (fs instanceof DistributedFileSystem) {
return FanOutOneBlockAsyncDFSOutputHelper.createOutput((DistributedFileSystem) fs, f,
overwrite, createParent, replication, blockSize, eventLoopGroup, channelClass, monitor);
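createOutput above returns the fan-out implementation when the target is HDFS and otherwise an AsyncFSOutput that wraps a plain FSDataOutputStream. A rough usage sketch follows; the shaded netty NIO classes come from hbase-thirdparty, as in the imports used elsewhere in this module, and StreamSlowMonitor.create(conf, name) is assumed to be the monitor factory, so verify both against your tree:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutput;
import org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper;
import org.apache.hadoop.hbase.io.asyncfs.monitor.StreamSlowMonitor;
import org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoopGroup;
import org.apache.hbase.thirdparty.io.netty.channel.socket.nio.NioSocketChannel;

public final class AsyncFSOutputSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path file = new Path("/tmp/asyncfs-sketch");
    FileSystem fs = FileSystem.get(conf);
    NioEventLoopGroup group = new NioEventLoopGroup();
    try {
      StreamSlowMonitor monitor = StreamSlowMonitor.create(conf, "sketch"); // assumed factory
      AsyncFSOutput out = AsyncFSOutputHelper.createOutput(fs, file, true /* overwrite */,
        false /* createParent */, fs.getDefaultReplication(file), fs.getDefaultBlockSize(file),
        group, NioSocketChannel.class, monitor);
      out.close();
    } finally {
      group.shutdownGracefully();
    }
  }
}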

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -180,7 +180,10 @@ public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {
// State for connections to DN
private enum State {
STREAMING, CLOSING, BROKEN, CLOSED
STREAMING,
CLOSING,
BROKEN,
CLOSED
}
private volatile State state;
@ -196,7 +199,7 @@ public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {
if (c.unfinishedReplicas.remove(channel.id())) {
long current = EnvironmentEdgeManager.currentTime();
streamSlowMonitor.checkProcessTimeAndSpeed(datanodeInfoMap.get(channel), c.packetDataLen,
current - c.flushTimestamp, c.lastAckTimestamp, c.unfinishedReplicas.size());
current - c.flushTimestamp, c.lastAckTimestamp, c.unfinishedReplicas.size());
c.lastAckTimestamp = current;
if (c.unfinishedReplicas.isEmpty()) {
// we need to remove first before complete the future. It is possible that after we
@ -284,13 +287,13 @@ public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {
protected void channelRead0(ChannelHandlerContext ctx, PipelineAckProto ack) throws Exception {
Status reply = getStatus(ack);
if (reply != Status.SUCCESS) {
failed(ctx.channel(), () -> new IOException("Bad response " + reply + " for block " +
block + " from datanode " + ctx.channel().remoteAddress()));
failed(ctx.channel(), () -> new IOException("Bad response " + reply + " for block " + block
+ " from datanode " + ctx.channel().remoteAddress()));
return;
}
if (PipelineAck.isRestartOOBStatus(reply)) {
failed(ctx.channel(), () -> new IOException("Restart response " + reply + " for block " +
block + " from datanode " + ctx.channel().remoteAddress()));
failed(ctx.channel(), () -> new IOException("Restart response " + reply + " for block "
+ block + " from datanode " + ctx.channel().remoteAddress()));
return;
}
if (ack.getSeqno() == HEART_BEAT_SEQNO) {
@ -345,10 +348,10 @@ public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {
}
}
FanOutOneBlockAsyncDFSOutput(Configuration conf,DistributedFileSystem dfs,
DFSClient client, ClientProtocol namenode, String clientName, String src, long fileId,
LocatedBlock locatedBlock, Encryptor encryptor, Map<Channel, DatanodeInfo> datanodeInfoMap,
DataChecksum summer, ByteBufAllocator alloc, StreamSlowMonitor streamSlowMonitor) {
FanOutOneBlockAsyncDFSOutput(Configuration conf, DistributedFileSystem dfs, DFSClient client,
ClientProtocol namenode, String clientName, String src, long fileId, LocatedBlock locatedBlock,
Encryptor encryptor, Map<Channel, DatanodeInfo> datanodeInfoMap, DataChecksum summer,
ByteBufAllocator alloc, StreamSlowMonitor streamSlowMonitor) {
this.conf = conf;
this.dfs = dfs;
this.client = client;
@ -403,7 +406,7 @@ public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {
}
private void flushBuffer(CompletableFuture<Long> future, ByteBuf dataBuf,
long nextPacketOffsetInBlock, boolean syncBlock) {
long nextPacketOffsetInBlock, boolean syncBlock) {
int dataLen = dataBuf.readableBytes();
int chunkLen = summer.getBytesPerChecksum();
int trailingPartialChunkLen = dataLen % chunkLen;
@ -413,13 +416,13 @@ public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {
summer.calculateChunkedSums(dataBuf.nioBuffer(), checksumBuf.nioBuffer(0, checksumLen));
checksumBuf.writerIndex(checksumLen);
PacketHeader header = new PacketHeader(4 + checksumLen + dataLen, nextPacketOffsetInBlock,
nextPacketSeqno, false, dataLen, syncBlock);
nextPacketSeqno, false, dataLen, syncBlock);
int headerLen = header.getSerializedSize();
ByteBuf headerBuf = alloc.buffer(headerLen);
header.putInBuffer(headerBuf.nioBuffer(0, headerLen));
headerBuf.writerIndex(headerLen);
Callback c = new Callback(future, nextPacketOffsetInBlock + dataLen,
datanodeInfoMap.keySet(), dataLen);
Callback c =
new Callback(future, nextPacketOffsetInBlock + dataLen, datanodeInfoMap.keySet(), dataLen);
waitingAckQueue.addLast(c);
// recheck again after we pushed the callback to queue
if (state != State.STREAMING && waitingAckQueue.peekFirst() == c) {
@ -429,7 +432,7 @@ public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {
return;
}
// TODO: we should perhaps measure time taken per DN here;
// we could collect statistics per DN, and/or exclude bad nodes in createOutput.
// we could collect statistics per DN, and/or exclude bad nodes in createOutput.
datanodeInfoMap.keySet().forEach(ch -> {
ch.write(headerBuf.retainedDuplicate());
ch.write(checksumBuf.retainedDuplicate());
@ -514,7 +517,7 @@ public class FanOutOneBlockAsyncDFSOutput implements AsyncFSOutput {
}
trailingPartialChunkLength = dataLen % summer.getBytesPerChecksum();
ByteBuf newBuf = alloc.directBuffer(sendBufSizePRedictor.guess(dataLen))
.ensureWritable(trailingPartialChunkLength);
.ensureWritable(trailingPartialChunkLength);
if (trailingPartialChunkLength != 0) {
buf.readerIndex(dataLen - trailingPartialChunkLength).readBytes(newBuf,
trailingPartialChunkLength);

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -116,7 +116,7 @@ import org.apache.hbase.thirdparty.io.netty.util.concurrent.Promise;
@InterfaceAudience.Private
public final class FanOutOneBlockAsyncDFSOutputHelper {
private static final Logger LOG =
LoggerFactory.getLogger(FanOutOneBlockAsyncDFSOutputHelper.class);
LoggerFactory.getLogger(FanOutOneBlockAsyncDFSOutputHelper.class);
private FanOutOneBlockAsyncDFSOutputHelper() {
}
@ -145,9 +145,8 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
// helper class for creating files.
private interface FileCreator {
default HdfsFileStatus create(ClientProtocol instance, String src, FsPermission masked,
String clientName, EnumSetWritable<CreateFlag> flag, boolean createParent,
short replication, long blockSize, CryptoProtocolVersion[] supportedVersions)
throws Exception {
String clientName, EnumSetWritable<CreateFlag> flag, boolean createParent, short replication,
long blockSize, CryptoProtocolVersion[] supportedVersions) throws Exception {
try {
return (HdfsFileStatus) createObject(instance, src, masked, clientName, flag, createParent,
replication, blockSize, supportedVersions);
@ -161,15 +160,15 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
}
Object createObject(ClientProtocol instance, String src, FsPermission masked, String clientName,
EnumSetWritable<CreateFlag> flag, boolean createParent, short replication, long blockSize,
CryptoProtocolVersion[] supportedVersions) throws Exception;
EnumSetWritable<CreateFlag> flag, boolean createParent, short replication, long blockSize,
CryptoProtocolVersion[] supportedVersions) throws Exception;
}
private static final FileCreator FILE_CREATOR;
private static LeaseManager createLeaseManager() throws NoSuchMethodException {
Method beginFileLeaseMethod =
DFSClient.class.getDeclaredMethod("beginFileLease", long.class, DFSOutputStream.class);
DFSClient.class.getDeclaredMethod("beginFileLease", long.class, DFSOutputStream.class);
beginFileLeaseMethod.setAccessible(true);
Method endFileLeaseMethod = DFSClient.class.getDeclaredMethod("endFileLease", long.class);
endFileLeaseMethod.setAccessible(true);
@ -197,13 +196,13 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
private static FileCreator createFileCreator3_3() throws NoSuchMethodException {
Method createMethod = ClientProtocol.class.getMethod("create", String.class, FsPermission.class,
String.class, EnumSetWritable.class, boolean.class, short.class, long.class,
CryptoProtocolVersion[].class, String.class, String.class);
String.class, EnumSetWritable.class, boolean.class, short.class, long.class,
CryptoProtocolVersion[].class, String.class, String.class);
return (instance, src, masked, clientName, flag, createParent, replication, blockSize,
supportedVersions) -> {
supportedVersions) -> {
return (HdfsFileStatus) createMethod.invoke(instance, src, masked, clientName, flag,
createParent, replication, blockSize, supportedVersions, null, null);
createParent, replication, blockSize, supportedVersions, null, null);
};
}
@ -213,7 +212,7 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
CryptoProtocolVersion[].class, String.class);
return (instance, src, masked, clientName, flag, createParent, replication, blockSize,
supportedVersions) -> {
supportedVersions) -> {
return (HdfsFileStatus) createMethod.invoke(instance, src, masked, clientName, flag,
createParent, replication, blockSize, supportedVersions, null);
};
@ -249,9 +248,9 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
LEASE_MANAGER = createLeaseManager();
FILE_CREATOR = createFileCreator();
} catch (Exception e) {
String msg = "Couldn't properly initialize access to HDFS internals. Please " +
"update your WAL Provider to not make use of the 'asyncfs' provider. See " +
"HBASE-16110 for more information.";
String msg = "Couldn't properly initialize access to HDFS internals. Please "
+ "update your WAL Provider to not make use of the 'asyncfs' provider. See "
+ "HBASE-16110 for more information.";
LOG.error(msg, e);
throw new Error(msg, e);
}
@ -282,7 +281,7 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
}
private static void processWriteBlockResponse(Channel channel, DatanodeInfo dnInfo,
Promise<Channel> promise, int timeoutMs) {
Promise<Channel> promise, int timeoutMs) {
channel.pipeline().addLast(new IdleStateHandler(timeoutMs, 0, 0, TimeUnit.MILLISECONDS),
new ProtobufVarint32FrameDecoder(),
new ProtobufDecoder(BlockOpResponseProto.getDefaultInstance()),
@ -290,7 +289,7 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
@Override
protected void channelRead0(ChannelHandlerContext ctx, BlockOpResponseProto resp)
throws Exception {
throws Exception {
Status pipelineStatus = resp.getStatus();
if (PipelineAck.isRestartOOBStatus(pipelineStatus)) {
throw new IOException("datanode " + dnInfo + " is restarting");
@ -298,11 +297,11 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
String logInfo = "ack with firstBadLink as " + resp.getFirstBadLink();
if (resp.getStatus() != Status.SUCCESS) {
if (resp.getStatus() == Status.ERROR_ACCESS_TOKEN) {
throw new InvalidBlockTokenException("Got access token error" + ", status message " +
resp.getMessage() + ", " + logInfo);
throw new InvalidBlockTokenException("Got access token error" + ", status message "
+ resp.getMessage() + ", " + logInfo);
} else {
throw new IOException("Got error" + ", status=" + resp.getStatus().name() +
", status message " + resp.getMessage() + ", " + logInfo);
throw new IOException("Got error" + ", status=" + resp.getStatus().name()
+ ", status message " + resp.getMessage() + ", " + logInfo);
}
}
// success
@ -329,7 +328,7 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
if (evt instanceof IdleStateEvent && ((IdleStateEvent) evt).state() == READER_IDLE) {
promise
.tryFailure(new IOException("Timeout(" + timeoutMs + "ms) waiting for response"));
.tryFailure(new IOException("Timeout(" + timeoutMs + "ms) waiting for response"));
} else {
super.userEventTriggered(ctx, evt);
}
@ -343,7 +342,7 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
}
private static void requestWriteBlock(Channel channel, StorageType storageType,
OpWriteBlockProto.Builder writeBlockProtoBuilder) throws IOException {
OpWriteBlockProto.Builder writeBlockProtoBuilder) throws IOException {
OpWriteBlockProto proto =
writeBlockProtoBuilder.setStorageType(PBHelperClient.convertStorageType(storageType)).build();
int protoLen = proto.getSerializedSize();
@ -356,9 +355,9 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
}
private static void initialize(Configuration conf, Channel channel, DatanodeInfo dnInfo,
StorageType storageType, OpWriteBlockProto.Builder writeBlockProtoBuilder, int timeoutMs,
DFSClient client, Token<BlockTokenIdentifier> accessToken, Promise<Channel> promise)
throws IOException {
StorageType storageType, OpWriteBlockProto.Builder writeBlockProtoBuilder, int timeoutMs,
DFSClient client, Token<BlockTokenIdentifier> accessToken, Promise<Channel> promise)
throws IOException {
Promise<Void> saslPromise = channel.eventLoop().newPromise();
trySaslNegotiate(conf, channel, dnInfo, timeoutMs, client, accessToken, saslPromise);
saslPromise.addListener(new FutureListener<Void>() {
@ -377,13 +376,13 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
}
private static List<Future<Channel>> connectToDataNodes(Configuration conf, DFSClient client,
String clientName, LocatedBlock locatedBlock, long maxBytesRcvd, long latestGS,
BlockConstructionStage stage, DataChecksum summer, EventLoopGroup eventLoopGroup,
Class<? extends Channel> channelClass) {
String clientName, LocatedBlock locatedBlock, long maxBytesRcvd, long latestGS,
BlockConstructionStage stage, DataChecksum summer, EventLoopGroup eventLoopGroup,
Class<? extends Channel> channelClass) {
StorageType[] storageTypes = locatedBlock.getStorageTypes();
DatanodeInfo[] datanodeInfos = locatedBlock.getLocations();
boolean connectToDnViaHostname =
conf.getBoolean(DFS_CLIENT_USE_DN_HOSTNAME, DFS_CLIENT_USE_DN_HOSTNAME_DEFAULT);
conf.getBoolean(DFS_CLIENT_USE_DN_HOSTNAME, DFS_CLIENT_USE_DN_HOSTNAME_DEFAULT);
int timeoutMs = conf.getInt(DFS_CLIENT_SOCKET_TIMEOUT_KEY, READ_TIMEOUT);
ExtendedBlock blockCopy = new ExtendedBlock(locatedBlock.getBlock());
blockCopy.setNumBytes(locatedBlock.getBlockSize());
@ -392,11 +391,11 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
.setToken(PBHelperClient.convert(locatedBlock.getBlockToken())))
.setClientName(clientName).build();
ChecksumProto checksumProto = DataTransferProtoUtil.toProto(summer);
OpWriteBlockProto.Builder writeBlockProtoBuilder = OpWriteBlockProto.newBuilder()
.setHeader(header).setStage(OpWriteBlockProto.BlockConstructionStage.valueOf(stage.name()))
.setPipelineSize(1).setMinBytesRcvd(locatedBlock.getBlock().getNumBytes())
.setMaxBytesRcvd(maxBytesRcvd).setLatestGenerationStamp(latestGS)
.setRequestedChecksum(checksumProto)
OpWriteBlockProto.Builder writeBlockProtoBuilder =
OpWriteBlockProto.newBuilder().setHeader(header)
.setStage(OpWriteBlockProto.BlockConstructionStage.valueOf(stage.name())).setPipelineSize(1)
.setMinBytesRcvd(locatedBlock.getBlock().getNumBytes()).setMaxBytesRcvd(maxBytesRcvd)
.setLatestGenerationStamp(latestGS).setRequestedChecksum(checksumProto)
.setCachingStrategy(CachingStrategyProto.newBuilder().setDropBehind(true).build());
List<Future<Channel>> futureList = new ArrayList<>(datanodeInfos.length);
for (int i = 0; i < datanodeInfos.length; i++) {
@ -406,26 +405,26 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
futureList.add(promise);
String dnAddr = dnInfo.getXferAddr(connectToDnViaHostname);
new Bootstrap().group(eventLoopGroup).channel(channelClass)
.option(CONNECT_TIMEOUT_MILLIS, timeoutMs).handler(new ChannelInitializer<Channel>() {
.option(CONNECT_TIMEOUT_MILLIS, timeoutMs).handler(new ChannelInitializer<Channel>() {
@Override
protected void initChannel(Channel ch) throws Exception {
// we need to get the remote address of the channel so we can only move on after
// channel connected. Leave an empty implementation here because netty does not allow
// a null handler.
}
}).connect(NetUtils.createSocketAddr(dnAddr)).addListener(new ChannelFutureListener() {
@Override
protected void initChannel(Channel ch) throws Exception {
// we need to get the remote address of the channel so we can only move on after
// channel connected. Leave an empty implementation here because netty does not allow
// a null handler.
}
}).connect(NetUtils.createSocketAddr(dnAddr)).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (future.isSuccess()) {
initialize(conf, future.channel(), dnInfo, storageType, writeBlockProtoBuilder,
timeoutMs, client, locatedBlock.getBlockToken(), promise);
} else {
promise.tryFailure(future.cause());
}
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (future.isSuccess()) {
initialize(conf, future.channel(), dnInfo, storageType, writeBlockProtoBuilder,
timeoutMs, client, locatedBlock.getBlockToken(), promise);
} else {
promise.tryFailure(future.cause());
}
});
}
});
}
return futureList;
}
@ -453,21 +452,21 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
}
private static FanOutOneBlockAsyncDFSOutput createOutput(DistributedFileSystem dfs, String src,
boolean overwrite, boolean createParent, short replication, long blockSize,
EventLoopGroup eventLoopGroup, Class<? extends Channel> channelClass,
StreamSlowMonitor monitor) throws IOException {
boolean overwrite, boolean createParent, short replication, long blockSize,
EventLoopGroup eventLoopGroup, Class<? extends Channel> channelClass, StreamSlowMonitor monitor)
throws IOException {
Configuration conf = dfs.getConf();
DFSClient client = dfs.getClient();
String clientName = client.getClientName();
ClientProtocol namenode = client.getNamenode();
int createMaxRetries = conf.getInt(ASYNC_DFS_OUTPUT_CREATE_MAX_RETRIES,
DEFAULT_ASYNC_DFS_OUTPUT_CREATE_MAX_RETRIES);
int createMaxRetries =
conf.getInt(ASYNC_DFS_OUTPUT_CREATE_MAX_RETRIES, DEFAULT_ASYNC_DFS_OUTPUT_CREATE_MAX_RETRIES);
ExcludeDatanodeManager excludeDatanodeManager = monitor.getExcludeDatanodeManager();
Set<DatanodeInfo> toExcludeNodes =
new HashSet<>(excludeDatanodeManager.getExcludeDNs().keySet());
for (int retry = 0;; retry++) {
LOG.debug("When create output stream for {}, exclude list is {}, retry={}", src,
toExcludeNodes, retry);
toExcludeNodes, retry);
HdfsFileStatus stat;
try {
stat = FILE_CREATOR.create(namenode, src,
@ -556,14 +555,14 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
* inside an {@link EventLoop}.
*/
public static FanOutOneBlockAsyncDFSOutput createOutput(DistributedFileSystem dfs, Path f,
boolean overwrite, boolean createParent, short replication, long blockSize,
EventLoopGroup eventLoopGroup, Class<? extends Channel> channelClass,
final StreamSlowMonitor monitor) throws IOException {
boolean overwrite, boolean createParent, short replication, long blockSize,
EventLoopGroup eventLoopGroup, Class<? extends Channel> channelClass,
final StreamSlowMonitor monitor) throws IOException {
return new FileSystemLinkResolver<FanOutOneBlockAsyncDFSOutput>() {
@Override
public FanOutOneBlockAsyncDFSOutput doCall(Path p)
throws IOException, UnresolvedLinkException {
throws IOException, UnresolvedLinkException {
return createOutput(dfs, p.toUri().getPath(), overwrite, createParent, replication,
blockSize, eventLoopGroup, channelClass, monitor);
}
@ -583,7 +582,7 @@ public final class FanOutOneBlockAsyncDFSOutputHelper {
}
static void completeFile(DFSClient client, ClientProtocol namenode, String src, String clientName,
ExtendedBlock block, long fileId) {
ExtendedBlock block, long fileId) {
for (int retry = 0;; retry++) {
try {
if (namenode.complete(src, clientName, block, fileId)) {

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -104,7 +104,7 @@ import org.apache.hbase.thirdparty.io.netty.util.concurrent.Promise;
@InterfaceAudience.Private
public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
private static final Logger LOG =
LoggerFactory.getLogger(FanOutOneBlockAsyncDFSOutputSaslHelper.class);
LoggerFactory.getLogger(FanOutOneBlockAsyncDFSOutputSaslHelper.class);
private FanOutOneBlockAsyncDFSOutputSaslHelper() {
}
@ -129,21 +129,21 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
private interface TransparentCryptoHelper {
Encryptor createEncryptor(Configuration conf, FileEncryptionInfo feInfo, DFSClient client)
throws IOException;
throws IOException;
}
private static final TransparentCryptoHelper TRANSPARENT_CRYPTO_HELPER;
private static SaslAdaptor createSaslAdaptor()
throws NoSuchFieldException, NoSuchMethodException {
throws NoSuchFieldException, NoSuchMethodException {
Field saslPropsResolverField =
SaslDataTransferClient.class.getDeclaredField("saslPropsResolver");
SaslDataTransferClient.class.getDeclaredField("saslPropsResolver");
saslPropsResolverField.setAccessible(true);
Field trustedChannelResolverField =
SaslDataTransferClient.class.getDeclaredField("trustedChannelResolver");
SaslDataTransferClient.class.getDeclaredField("trustedChannelResolver");
trustedChannelResolverField.setAccessible(true);
Field fallbackToSimpleAuthField =
SaslDataTransferClient.class.getDeclaredField("fallbackToSimpleAuth");
SaslDataTransferClient.class.getDeclaredField("fallbackToSimpleAuth");
fallbackToSimpleAuthField.setAccessible(true);
return new SaslAdaptor() {
@ -177,7 +177,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
}
private static TransparentCryptoHelper createTransparentCryptoHelperWithoutHDFS12396()
throws NoSuchMethodException {
throws NoSuchMethodException {
Method decryptEncryptedDataEncryptionKeyMethod = DFSClient.class
.getDeclaredMethod("decryptEncryptedDataEncryptionKey", FileEncryptionInfo.class);
decryptEncryptedDataEncryptionKeyMethod.setAccessible(true);
@ -185,7 +185,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
@Override
public Encryptor createEncryptor(Configuration conf, FileEncryptionInfo feInfo,
DFSClient client) throws IOException {
DFSClient client) throws IOException {
try {
KeyVersion decryptedKey =
(KeyVersion) decryptEncryptedDataEncryptionKeyMethod.invoke(client, feInfo);
@ -206,7 +206,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
}
private static TransparentCryptoHelper createTransparentCryptoHelperWithHDFS12396()
throws ClassNotFoundException, NoSuchMethodException {
throws ClassNotFoundException, NoSuchMethodException {
Class<?> hdfsKMSUtilCls = Class.forName("org.apache.hadoop.hdfs.HdfsKMSUtil");
Method decryptEncryptedDataEncryptionKeyMethod = hdfsKMSUtilCls.getDeclaredMethod(
"decryptEncryptedDataEncryptionKey", FileEncryptionInfo.class, KeyProvider.class);
@ -215,7 +215,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
@Override
public Encryptor createEncryptor(Configuration conf, FileEncryptionInfo feInfo,
DFSClient client) throws IOException {
DFSClient client) throws IOException {
try {
KeyVersion decryptedKey = (KeyVersion) decryptEncryptedDataEncryptionKeyMethod
.invoke(null, feInfo, client.getKeyProvider());
@ -236,12 +236,12 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
}
private static TransparentCryptoHelper createTransparentCryptoHelper()
throws NoSuchMethodException, ClassNotFoundException {
throws NoSuchMethodException, ClassNotFoundException {
try {
return createTransparentCryptoHelperWithoutHDFS12396();
} catch (NoSuchMethodException e) {
LOG.debug("No decryptEncryptedDataEncryptionKey method in DFSClient," +
" should be hadoop version with HDFS-12396", e);
LOG.debug("No decryptEncryptedDataEncryptionKey method in DFSClient,"
+ " should be hadoop version with HDFS-12396", e);
}
return createTransparentCryptoHelperWithHDFS12396();
}
@ -252,8 +252,8 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
TRANSPARENT_CRYPTO_HELPER = createTransparentCryptoHelper();
} catch (Exception e) {
String msg = "Couldn't properly initialize access to HDFS internals. Please "
+ "update your WAL Provider to not make use of the 'asyncfs' provider. See "
+ "HBASE-16110 for more information.";
+ "update your WAL Provider to not make use of the 'asyncfs' provider. See "
+ "HBASE-16110 for more information.";
LOG.error(msg, e);
throw new Error(msg, e);
}
@ -324,8 +324,8 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
private int step = 0;
public SaslNegotiateHandler(Configuration conf, String username, char[] password,
Map<String, String> saslProps, int timeoutMs, Promise<Void> promise,
DFSClient dfsClient) throws SaslException {
Map<String, String> saslProps, int timeoutMs, Promise<Void> promise, DFSClient dfsClient)
throws SaslException {
this.conf = conf;
this.saslProps = saslProps;
this.saslClient = Sasl.createSaslClient(new String[] { MECHANISM }, username, PROTOCOL,
@ -355,8 +355,8 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
}
/**
* The asyncfs subsystem emulates a HDFS client by sending protobuf messages via netty.
* After Hadoop 3.3.0, the protobuf classes are relocated to org.apache.hadoop.thirdparty.protobuf.*.
* The asyncfs subsystem emulates a HDFS client by sending protobuf messages via netty. After
* Hadoop 3.3.0, the protobuf classes are relocated to org.apache.hadoop.thirdparty.protobuf.*.
* Use Reflection to check which ones to use.
*/
private static class BuilderPayloadSetter {
@ -366,13 +366,11 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
/**
* Create a ByteString from byte array without copying (wrap), and then set it as the payload
* for the builder.
*
* @param builder builder for HDFS DataTransferEncryptorMessage.
* @param payload byte array of payload.
* @throws IOException
* @param payload byte array of payload. n
*/
static void wrapAndSetPayload(DataTransferEncryptorMessageProto.Builder builder, byte[] payload)
throws IOException {
static void wrapAndSetPayload(DataTransferEncryptorMessageProto.Builder builder,
byte[] payload) throws IOException {
Object byteStringObject;
try {
// byteStringObject = new LiteralByteString(payload);
@ -396,18 +394,18 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
try {
// See if it can load the relocated ByteString, which comes from hadoop-thirdparty.
byteStringClass = Class.forName("org.apache.hadoop.thirdparty.protobuf.ByteString");
LOG.debug("Found relocated ByteString class from hadoop-thirdparty." +
" Assuming this is Hadoop 3.3.0+.");
LOG.debug("Found relocated ByteString class from hadoop-thirdparty."
+ " Assuming this is Hadoop 3.3.0+.");
} catch (ClassNotFoundException e) {
LOG.debug("Did not find relocated ByteString class from hadoop-thirdparty." +
" Assuming this is below Hadoop 3.3.0", e);
LOG.debug("Did not find relocated ByteString class from hadoop-thirdparty."
+ " Assuming this is below Hadoop 3.3.0", e);
}
// LiteralByteString is a package private class in protobuf. Make it accessible.
Class<?> literalByteStringClass;
try {
literalByteStringClass = Class.forName(
"org.apache.hadoop.thirdparty.protobuf.ByteString$LiteralByteString");
literalByteStringClass =
Class.forName("org.apache.hadoop.thirdparty.protobuf.ByteString$LiteralByteString");
LOG.debug("Shaded LiteralByteString from hadoop-thirdparty is found.");
} catch (ClassNotFoundException e) {
try {
@ -435,9 +433,9 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
}
private void sendSaslMessage(ChannelHandlerContext ctx, byte[] payload,
List<CipherOption> options) throws IOException {
List<CipherOption> options) throws IOException {
DataTransferEncryptorMessageProto.Builder builder =
DataTransferEncryptorMessageProto.newBuilder();
DataTransferEncryptorMessageProto.newBuilder();
builder.setStatus(DataTransferEncryptorStatus.SUCCESS);
if (payload != null) {
BuilderPayloadSetter.wrapAndSetPayload(builder, payload);
@ -486,7 +484,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
private boolean requestedQopContainsPrivacy() {
Set<String> requestedQop =
ImmutableSet.copyOf(Arrays.asList(saslProps.get(Sasl.QOP).split(",")));
ImmutableSet.copyOf(Arrays.asList(saslProps.get(Sasl.QOP).split(",")));
return requestedQop.contains("auth-conf");
}
@ -495,15 +493,14 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
throw new IOException("Failed to complete SASL handshake");
}
Set<String> requestedQop =
ImmutableSet.copyOf(Arrays.asList(saslProps.get(Sasl.QOP).split(",")));
ImmutableSet.copyOf(Arrays.asList(saslProps.get(Sasl.QOP).split(",")));
String negotiatedQop = getNegotiatedQop();
LOG.debug(
"Verifying QOP, requested QOP = " + requestedQop + ", negotiated QOP = " + negotiatedQop);
if (!requestedQop.contains(negotiatedQop)) {
throw new IOException(String.format("SASL handshake completed, but "
+ "channel does not have acceptable quality of protection, "
+ "requested = %s, negotiated = %s",
requestedQop, negotiatedQop));
+ "channel does not have acceptable quality of protection, "
+ "requested = %s, negotiated = %s", requestedQop, negotiatedQop));
}
}
@ -522,13 +519,13 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
outKey = saslClient.unwrap(outKey, 0, outKey.length);
}
return new CipherOption(option.getCipherSuite(), inKey, option.getInIv(), outKey,
option.getOutIv());
option.getOutIv());
}
private CipherOption getCipherOption(DataTransferEncryptorMessageProto proto,
boolean isNegotiatedQopPrivacy, SaslClient saslClient) throws IOException {
boolean isNegotiatedQopPrivacy, SaslClient saslClient) throws IOException {
List<CipherOption> cipherOptions =
PBHelperClient.convertCipherOptionProtos(proto.getCipherOptionList());
PBHelperClient.convertCipherOptionProtos(proto.getCipherOptionList());
if (cipherOptions == null || cipherOptions.isEmpty()) {
return null;
}
@ -558,7 +555,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
assert response == null;
checkSaslComplete();
CipherOption cipherOption =
getCipherOption(proto, isNegotiatedQopPrivacy(), saslClient);
getCipherOption(proto, isNegotiatedQopPrivacy(), saslClient);
ChannelPipeline p = ctx.pipeline();
while (p.first() != null) {
p.removeFirst();
@ -639,7 +636,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise)
throws Exception {
throws Exception {
if (msg instanceof ByteBuf) {
ByteBuf buf = (ByteBuf) msg;
cBuf.addComponent(buf);
@ -676,7 +673,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
private final Decryptor decryptor;
public DecryptHandler(CryptoCodec codec, byte[] key, byte[] iv)
throws GeneralSecurityException, IOException {
throws GeneralSecurityException, IOException {
this.decryptor = codec.createDecryptor();
this.decryptor.init(key, Arrays.copyOf(iv, iv.length));
}
@ -709,14 +706,14 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
private final Encryptor encryptor;
public EncryptHandler(CryptoCodec codec, byte[] key, byte[] iv)
throws GeneralSecurityException, IOException {
throws GeneralSecurityException, IOException {
this.encryptor = codec.createEncryptor();
this.encryptor.init(key, Arrays.copyOf(iv, iv.length));
}
@Override
protected ByteBuf allocateBuffer(ChannelHandlerContext ctx, ByteBuf msg, boolean preferDirect)
throws Exception {
throws Exception {
if (preferDirect) {
return ctx.alloc().directBuffer(msg.readableBytes());
} else {
@ -747,7 +744,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
private static String getUserNameFromEncryptionKey(DataEncryptionKey encryptionKey) {
return encryptionKey.keyId + NAME_DELIMITER + encryptionKey.blockPoolId + NAME_DELIMITER
+ Base64.getEncoder().encodeToString(encryptionKey.nonce);
+ Base64.getEncoder().encodeToString(encryptionKey.nonce);
}
private static char[] encryptionKeyToPassword(byte[] encryptionKey) {
@ -771,26 +768,26 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
}
private static void doSaslNegotiation(Configuration conf, Channel channel, int timeoutMs,
String username, char[] password, Map<String, String> saslProps, Promise<Void> saslPromise,
DFSClient dfsClient) {
String username, char[] password, Map<String, String> saslProps, Promise<Void> saslPromise,
DFSClient dfsClient) {
try {
channel.pipeline().addLast(new IdleStateHandler(timeoutMs, 0, 0, TimeUnit.MILLISECONDS),
new ProtobufVarint32FrameDecoder(),
new ProtobufDecoder(DataTransferEncryptorMessageProto.getDefaultInstance()),
new SaslNegotiateHandler(conf, username, password, saslProps, timeoutMs, saslPromise,
dfsClient));
dfsClient));
} catch (SaslException e) {
saslPromise.tryFailure(e);
}
}
static void trySaslNegotiate(Configuration conf, Channel channel, DatanodeInfo dnInfo,
int timeoutMs, DFSClient client, Token<BlockTokenIdentifier> accessToken,
Promise<Void> saslPromise) throws IOException {
int timeoutMs, DFSClient client, Token<BlockTokenIdentifier> accessToken,
Promise<Void> saslPromise) throws IOException {
SaslDataTransferClient saslClient = client.getSaslDataTransferClient();
SaslPropertiesResolver saslPropsResolver = SASL_ADAPTOR.getSaslPropsResolver(saslClient);
TrustedChannelResolver trustedChannelResolver =
SASL_ADAPTOR.getTrustedChannelResolver(saslClient);
SASL_ADAPTOR.getTrustedChannelResolver(saslClient);
AtomicBoolean fallbackToSimpleAuth = SASL_ADAPTOR.getFallbackToSimpleAuth(saslClient);
InetAddress addr = ((InetSocketAddress) channel.remoteAddress()).getAddress();
if (trustedChannelResolver.isTrusted() || trustedChannelResolver.isTrusted(addr)) {
@ -805,24 +802,23 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
}
doSaslNegotiation(conf, channel, timeoutMs, getUserNameFromEncryptionKey(encryptionKey),
encryptionKeyToPassword(encryptionKey.encryptionKey),
createSaslPropertiesForEncryption(encryptionKey.encryptionAlgorithm), saslPromise,
client);
createSaslPropertiesForEncryption(encryptionKey.encryptionAlgorithm), saslPromise, client);
} else if (!UserGroupInformation.isSecurityEnabled()) {
if (LOG.isDebugEnabled()) {
LOG.debug("SASL client skipping handshake in unsecured configuration for addr = " + addr
+ ", datanodeId = " + dnInfo);
+ ", datanodeId = " + dnInfo);
}
saslPromise.trySuccess(null);
} else if (dnInfo.getXferPort() < 1024) {
if (LOG.isDebugEnabled()) {
LOG.debug("SASL client skipping handshake in secured configuration with "
+ "privileged port for addr = " + addr + ", datanodeId = " + dnInfo);
+ "privileged port for addr = " + addr + ", datanodeId = " + dnInfo);
}
saslPromise.trySuccess(null);
} else if (fallbackToSimpleAuth != null && fallbackToSimpleAuth.get()) {
if (LOG.isDebugEnabled()) {
LOG.debug("SASL client skipping handshake in secured configuration with "
+ "unsecured cluster for addr = " + addr + ", datanodeId = " + dnInfo);
+ "unsecured cluster for addr = " + addr + ", datanodeId = " + dnInfo);
}
saslPromise.trySuccess(null);
} else if (saslPropsResolver != null) {
@ -832,21 +828,21 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
}
doSaslNegotiation(conf, channel, timeoutMs, buildUsername(accessToken),
buildClientPassword(accessToken), saslPropsResolver.getClientProperties(addr), saslPromise,
client);
client);
} else {
// It's a secured cluster using non-privileged ports, but no SASL. The only way this can
// happen is if the DataNode has ignore.secure.ports.for.testing configured, so this is a rare
// edge case.
if (LOG.isDebugEnabled()) {
LOG.debug("SASL client skipping handshake in secured configuration with no SASL "
+ "protection configured for addr = " + addr + ", datanodeId = " + dnInfo);
+ "protection configured for addr = " + addr + ", datanodeId = " + dnInfo);
}
saslPromise.trySuccess(null);
}
}
static Encryptor createEncryptor(Configuration conf, HdfsFileStatus stat, DFSClient client)
throws IOException {
throws IOException {
FileEncryptionInfo feInfo = stat.getFileEncryptionInfo();
if (feInfo == null) {
return null;
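trySaslNegotiate above short-circuits in several situations before attempting a real handshake. Condensed purely for illustration (names hypothetical, the data-encryption-key branch and the netty Promise plumbing omitted), the decision order is roughly:

// Illustrative only; the real method negotiates immediately when a DataEncryptionKey is
// available and reports outcomes through the supplied Promise rather than a return value.
final class SaslDecisionSketch {
  static boolean needsSaslHandshake(boolean trustedChannel, boolean securityEnabled,
    int datanodeXferPort, boolean fallbackToSimpleAuth, boolean hasSaslPropsResolver) {
    if (trustedChannel) {
      return false; // trusted channel resolver accepts plain traffic
    }
    if (!securityEnabled) {
      return false; // unsecured configuration
    }
    if (datanodeXferPort < 1024) {
      return false; // privileged port implies a secured datanode
    }
    if (fallbackToSimpleAuth) {
      return false; // secured client talking to an unsecured cluster
    }
    return hasSaslPropsResolver; // otherwise negotiate only if SASL properties are configured
  }
}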

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -17,33 +17,29 @@
*/
package org.apache.hadoop.hbase.io.asyncfs;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.List;
import org.apache.yetus.audience.InterfaceAudience;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf;
import org.apache.hbase.thirdparty.io.netty.buffer.ByteBufUtil;
import org.apache.hbase.thirdparty.io.netty.channel.ChannelHandlerContext;
import org.apache.hbase.thirdparty.io.netty.handler.codec.MessageToMessageDecoder;
import org.apache.hbase.thirdparty.io.netty.util.internal.ObjectUtil;
import org.apache.yetus.audience.InterfaceAudience;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.List;
/**
* Modified based on io.netty.handler.codec.protobuf.ProtobufDecoder.
* The Netty's ProtobufDecode supports unshaded protobuf messages (com.google.protobuf).
*
* Hadoop 3.3.0 and above relocates protobuf classes to a shaded jar (hadoop-thirdparty), and
* so we must use reflection to detect which one (relocated or not) to use.
*
* Do not use this to process HBase's shaded protobuf messages. This is meant to process the
* protobuf messages in HDFS for the asyncfs use case.
* */
* Modified based on io.netty.handler.codec.protobuf.ProtobufDecoder. The Netty's ProtobufDecode
* supports unshaded protobuf messages (com.google.protobuf). Hadoop 3.3.0 and above relocates
* protobuf classes to a shaded jar (hadoop-thirdparty), and so we must use reflection to detect
* which one (relocated or not) to use. Do not use this to process HBase's shaded protobuf messages.
* This is meant to process the protobuf messages in HDFS for the asyncfs use case.
*/
@InterfaceAudience.Private
public class ProtobufDecoder extends MessageToMessageDecoder<ByteBuf> {
private static final Logger LOG =
LoggerFactory.getLogger(ProtobufDecoder.class);
private static final Logger LOG = LoggerFactory.getLogger(ProtobufDecoder.class);
private static Class<?> protobufMessageLiteClass = null;
private static Class<?> protobufMessageLiteBuilderClass = null;
@ -60,23 +56,22 @@ public class ProtobufDecoder extends MessageToMessageDecoder<ByteBuf> {
private Object parser;
private Object builder;
public ProtobufDecoder(Object prototype) {
try {
Method getDefaultInstanceForTypeMethod = protobufMessageLiteClass.getMethod(
"getDefaultInstanceForType");
Object prototype1 = getDefaultInstanceForTypeMethod
.invoke(ObjectUtil.checkNotNull(prototype, "prototype"));
Method getDefaultInstanceForTypeMethod =
protobufMessageLiteClass.getMethod("getDefaultInstanceForType");
Object prototype1 =
getDefaultInstanceForTypeMethod.invoke(ObjectUtil.checkNotNull(prototype, "prototype"));
// parser = prototype.getParserForType()
parser = getParserForTypeMethod.invoke(prototype1);
parseFromMethod = parser.getClass().getMethod(
"parseFrom", byte[].class, int.class, int.class);
parseFromMethod =
parser.getClass().getMethod("parseFrom", byte[].class, int.class, int.class);
// builder = prototype.newBuilderForType();
builder = newBuilderForTypeMethod.invoke(prototype1);
mergeFromMethod = builder.getClass().getMethod(
"mergeFrom", byte[].class, int.class, int.class);
mergeFromMethod =
builder.getClass().getMethod("mergeFrom", byte[].class, int.class, int.class);
// All protobuf message builders inherits from MessageLite.Builder
buildMethod = protobufMessageLiteBuilderClass.getDeclaredMethod("build");
@ -88,8 +83,7 @@ public class ProtobufDecoder extends MessageToMessageDecoder<ByteBuf> {
}
}
protected void decode(
ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws Exception {
protected void decode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws Exception {
int length = msg.readableBytes();
byte[] array;
int offset;
@ -122,8 +116,8 @@ public class ProtobufDecoder extends MessageToMessageDecoder<ByteBuf> {
try {
protobufMessageLiteClass = Class.forName("org.apache.hadoop.thirdparty.protobuf.MessageLite");
protobufMessageLiteBuilderClass = Class.forName(
"org.apache.hadoop.thirdparty.protobuf.MessageLite$Builder");
protobufMessageLiteBuilderClass =
Class.forName("org.apache.hadoop.thirdparty.protobuf.MessageLite$Builder");
LOG.debug("Hadoop 3.3 and above shades protobuf.");
} catch (ClassNotFoundException e) {
LOG.debug("Hadoop 3.2 and below use unshaded protobuf.", e);
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -22,7 +22,6 @@ import java.nio.ByteBuffer;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.io.ByteArrayOutputStream;
@ -50,7 +49,7 @@ public class WrapperAsyncFSOutput implements AsyncFSOutput {
public WrapperAsyncFSOutput(Path file, FSDataOutputStream out) {
this.out = out;
this.executor = Executors.newSingleThreadExecutor(new ThreadFactoryBuilder().setDaemon(true)
.setNameFormat("AsyncFSOutputFlusher-" + file.toString().replace("%", "%%")).build());
.setNameFormat("AsyncFSOutputFlusher-" + file.toString().replace("%", "%%")).build());
}
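Editor's note: the constructor above dedicates one daemon flusher thread to each output file, and because ThreadFactoryBuilder.setNameFormat treats its argument as a format string, any % in the file name must be escaped. A small stand-alone sketch of the same setup, using plain Guava instead of the relocated hbase-thirdparty copy (the file path is made up):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import com.google.common.util.concurrent.ThreadFactoryBuilder;

public final class FlusherThreadSketch {
  public static void main(String[] args) throws Exception {
    String file = "/hbase/WALs/example%file";
    ExecutorService executor = Executors.newSingleThreadExecutor(new ThreadFactoryBuilder()
        .setDaemon(true)
        // % must be doubled because setNameFormat runs the name through String.format
        .setNameFormat("AsyncFSOutputFlusher-" + file.replace("%", "%%"))
        .build());
    // every flush submitted here runs on the same single thread, so no extra
    // synchronization is needed around the synced length bookkeeping
    executor.submit(() -> System.out.println(Thread.currentThread().getName())).get();
    executor.shutdown();
  }
}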
@Override
@ -95,8 +94,8 @@ public class WrapperAsyncFSOutput implements AsyncFSOutput {
}
long pos = out.getPos();
/**
* This flush0 method could only be called by single thread, so here we could
* safely overwrite without any synchronization.
* This flush0 method could only be called by single thread, so here we could safely overwrite
* without any synchronization.
*/
this.syncedLength = pos;
future.complete(pos);
@ -56,24 +56,23 @@ public class ExcludeDatanodeManager implements ConfigurationObserver {
private final int maxExcludeDNCount;
private final Configuration conf;
// This is a map of providerId->StreamSlowMonitor
private final Map<String, StreamSlowMonitor> streamSlowMonitors =
new ConcurrentHashMap<>(1);
private final Map<String, StreamSlowMonitor> streamSlowMonitors = new ConcurrentHashMap<>(1);
public ExcludeDatanodeManager(Configuration conf) {
this.conf = conf;
this.maxExcludeDNCount = conf.getInt(WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT_KEY,
DEFAULT_WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT);
this.excludeDNsCache = CacheBuilder.newBuilder()
.expireAfterWrite(this.conf.getLong(WAL_EXCLUDE_DATANODE_TTL_KEY,
DEFAULT_WAL_EXCLUDE_DATANODE_TTL), TimeUnit.HOURS)
.maximumSize(this.maxExcludeDNCount)
.build();
.expireAfterWrite(
this.conf.getLong(WAL_EXCLUDE_DATANODE_TTL_KEY, DEFAULT_WAL_EXCLUDE_DATANODE_TTL),
TimeUnit.HOURS)
.maximumSize(this.maxExcludeDNCount).build();
}
/**
* Try to add a datanode to the regionserver excluding cache
* @param datanodeInfo the datanode to be added to the excluded cache
* @param cause the cause that the datanode is hope to be excluded
* @param cause the cause that the datanode is hope to be excluded
* @return True if the datanode is added to the regionserver excluding cache, false otherwise
*/
public boolean tryAddExcludeDN(DatanodeInfo datanodeInfo, String cause) {
@ -85,15 +84,15 @@ public class ExcludeDatanodeManager implements ConfigurationObserver {
datanodeInfo, cause, excludeDNsCache.size());
return true;
}
LOG.debug("Try add datanode {} to exclude cache by [{}] failed, "
+ "current exclude DNs are {}", datanodeInfo, cause, getExcludeDNs().keySet());
LOG.debug(
"Try add datanode {} to exclude cache by [{}] failed, " + "current exclude DNs are {}",
datanodeInfo, cause, getExcludeDNs().keySet());
return false;
}
public StreamSlowMonitor getStreamSlowMonitor(String name) {
String key = name == null || name.isEmpty() ? "defaultMonitorName" : name;
return streamSlowMonitors
.computeIfAbsent(key, k -> new StreamSlowMonitor(conf, key, this));
return streamSlowMonitors.computeIfAbsent(key, k -> new StreamSlowMonitor(conf, key, this));
}
public Map<DatanodeInfo, Long> getExcludeDNs() {
@ -105,10 +104,12 @@ public class ExcludeDatanodeManager implements ConfigurationObserver {
for (StreamSlowMonitor monitor : streamSlowMonitors.values()) {
monitor.onConfigurationChange(conf);
}
this.excludeDNsCache = CacheBuilder.newBuilder().expireAfterWrite(
this.conf.getLong(WAL_EXCLUDE_DATANODE_TTL_KEY, DEFAULT_WAL_EXCLUDE_DATANODE_TTL),
TimeUnit.HOURS).maximumSize(this.conf
.getInt(WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT_KEY, DEFAULT_WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT))
this.excludeDNsCache = CacheBuilder.newBuilder()
.expireAfterWrite(
this.conf.getLong(WAL_EXCLUDE_DATANODE_TTL_KEY, DEFAULT_WAL_EXCLUDE_DATANODE_TTL),
TimeUnit.HOURS)
.maximumSize(this.conf.getInt(WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT_KEY,
DEFAULT_WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT))
.build();
}
}
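Editor's note: the manager above keeps excluded datanodes in a size-capped cache whose entries expire some hours after being written, so an excluded node is eventually retried. A minimal sketch of that cache shape, with plain Guava and String keys standing in for the relocated Guava and DatanodeInfo; the TTL and size values are illustrative, not the shipped defaults.

import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public final class ExcludeCacheSketch {
  public static void main(String[] args) {
    long ttlHours = 6;    // placeholder for the WAL exclude-datanode TTL setting
    int maxExcluded = 3;  // cap so a transient incident cannot exclude the whole cluster
    Cache<String, Long> excluded = CacheBuilder.newBuilder()
        .expireAfterWrite(ttlHours, TimeUnit.HOURS)
        .maximumSize(maxExcluded)
        .build();
    excluded.put("dn-127.0.0.1:50010", System.currentTimeMillis());
    System.out.println("currently excluded: " + excluded.asMap().keySet());
  }
}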
@ -38,18 +38,16 @@ import org.apache.hbase.thirdparty.com.google.common.cache.CacheLoader;
import org.apache.hbase.thirdparty.com.google.common.cache.LoadingCache;
/**
* Class for monitor the wal file flush performance.
* Each active wal file has a StreamSlowMonitor.
* Class for monitor the wal file flush performance. Each active wal file has a StreamSlowMonitor.
*/
@InterfaceAudience.Private
public class StreamSlowMonitor implements ConfigurationObserver {
private static final Logger LOG = LoggerFactory.getLogger(StreamSlowMonitor.class);
/**
* Configure for the min count for a datanode detected slow.
* If a datanode is detected slow times up to this count, then it will be added to the exclude
* datanode cache by {@link ExcludeDatanodeManager#tryAddExcludeDN(DatanodeInfo, String)}
* of this regionsever.
* Configure for the min count for a datanode detected slow. If a datanode is detected slow times
* up to this count, then it will be added to the exclude datanode cache by
* {@link ExcludeDatanodeManager#tryAddExcludeDN(DatanodeInfo, String)} of this regionsever.
*/
private static final String WAL_SLOW_DETECT_MIN_COUNT_KEY =
"hbase.regionserver.async.wal.min.slow.detect.count";
@ -63,9 +61,9 @@ public class StreamSlowMonitor implements ConfigurationObserver {
private static final long DEFAULT_WAL_SLOW_DETECT_DATA_TTL = 10 * 60 * 1000; // 10min in ms
/**
* Configure for the speed check of packet min length.
* For packets whose data length smaller than this value, check slow by processing time.
* While for packets whose data length larger than this value, check slow by flushing speed.
* Configure for the speed check of packet min length. For packets whose data length smaller than
* this value, check slow by processing time. While for packets whose data length larger than this
* value, check slow by flushing speed.
*/
private static final String DATANODE_PACKET_FLUSH_CHECK_SPEED_MIN_DATA_LENGTH_KEY =
"hbase.regionserver.async.wal.datanode.slow.check.speed.packet.data.length.min";
@ -73,8 +71,8 @@ public class StreamSlowMonitor implements ConfigurationObserver {
private static final long DEFAULT_DATANODE_PACKET_FLUSH_CHECK_SPEED_MIN_DATA_LENGTH = 64 * 1024;
/**
* Configure for the slow packet process time, a duration from send to ACK.
* The processing time check is for packets that data length smaller than
* Configure for the slow packet process time, a duration from send to ACK. The processing time
* check is for packets that data length smaller than
* {@link StreamSlowMonitor#DATANODE_PACKET_FLUSH_CHECK_SPEED_MIN_DATA_LENGTH_KEY}
*/
public static final String DATANODE_SLOW_PACKET_PROCESS_TIME_KEY =
@ -105,15 +103,16 @@ public class StreamSlowMonitor implements ConfigurationObserver {
private long minLengthForSpeedCheck;
public StreamSlowMonitor(Configuration conf, String name,
ExcludeDatanodeManager excludeDatanodeManager) {
ExcludeDatanodeManager excludeDatanodeManager) {
setConf(conf);
this.name = name;
this.excludeDatanodeManager = excludeDatanodeManager;
this.datanodeSlowDataQueue = CacheBuilder.newBuilder()
.maximumSize(conf.getInt(WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT_KEY,
DEFAULT_WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT))
.expireAfterWrite(conf.getLong(WAL_EXCLUDE_DATANODE_TTL_KEY,
DEFAULT_WAL_EXCLUDE_DATANODE_TTL), TimeUnit.HOURS)
.expireAfterWrite(
conf.getLong(WAL_EXCLUDE_DATANODE_TTL_KEY, DEFAULT_WAL_EXCLUDE_DATANODE_TTL),
TimeUnit.HOURS)
.build(new CacheLoader<DatanodeInfo, Deque<PacketAckData>>() {
@Override
public Deque<PacketAckData> load(DatanodeInfo key) throws Exception {
@ -129,30 +128,33 @@ public class StreamSlowMonitor implements ConfigurationObserver {
/**
* Check if the packet process time shows that the relevant datanode is a slow node.
* @param datanodeInfo the datanode that processed the packet
* @param packetDataLen the data length of the packet (in bytes)
* @param processTimeMs the process time (in ms) of the packet on the datanode,
* @param datanodeInfo the datanode that processed the packet
* @param packetDataLen the data length of the packet (in bytes)
* @param processTimeMs the process time (in ms) of the packet on the datanode,
* @param lastAckTimestamp the last acked timestamp of the packet on another datanode
* @param unfinished if the packet is unfinished flushed to the datanode replicas
* @param unfinished if the packet is unfinished flushed to the datanode replicas
*/
public void checkProcessTimeAndSpeed(DatanodeInfo datanodeInfo, long packetDataLen,
long processTimeMs, long lastAckTimestamp, int unfinished) {
long processTimeMs, long lastAckTimestamp, int unfinished) {
long current = EnvironmentEdgeManager.currentTime();
// Here are two conditions used to determine whether a datanode is slow,
// 1. For small packet, we just have a simple time limit, without considering
// the size of the packet.
// 2. For large packet, we will calculate the speed, and check if the speed is too slow.
boolean slow = (packetDataLen <= minLengthForSpeedCheck && processTimeMs > slowPacketAckMs) || (
packetDataLen > minLengthForSpeedCheck
boolean slow = (packetDataLen <= minLengthForSpeedCheck && processTimeMs > slowPacketAckMs)
|| (packetDataLen > minLengthForSpeedCheck
&& (double) packetDataLen / processTimeMs < minPacketFlushSpeedKBs);
if (slow) {
// Check if large diff ack timestamp between replicas,
// should try to avoid misjudgments that caused by GC STW.
if ((lastAckTimestamp > 0 && current - lastAckTimestamp > slowPacketAckMs / 2) || (
lastAckTimestamp <= 0 && unfinished == 0)) {
LOG.info("Slow datanode: {}, data length={}, duration={}ms, unfinishedReplicas={}, "
+ "lastAckTimestamp={}, monitor name: {}", datanodeInfo, packetDataLen, processTimeMs,
unfinished, lastAckTimestamp, this.name);
if (
(lastAckTimestamp > 0 && current - lastAckTimestamp > slowPacketAckMs / 2)
|| (lastAckTimestamp <= 0 && unfinished == 0)
) {
LOG.info(
"Slow datanode: {}, data length={}, duration={}ms, unfinishedReplicas={}, "
+ "lastAckTimestamp={}, monitor name: {}",
datanodeInfo, packetDataLen, processTimeMs, unfinished, lastAckTimestamp, this.name);
if (addSlowAckData(datanodeInfo, packetDataLen, processTimeMs)) {
excludeDatanodeManager.tryAddExcludeDN(datanodeInfo, "slow packet ack");
}
@ -168,8 +170,10 @@ public class StreamSlowMonitor implements ConfigurationObserver {
private boolean addSlowAckData(DatanodeInfo datanodeInfo, long dataLength, long processTime) {
Deque<PacketAckData> slowDNQueue = datanodeSlowDataQueue.getUnchecked(datanodeInfo);
long current = EnvironmentEdgeManager.currentTime();
while (!slowDNQueue.isEmpty() && (current - slowDNQueue.getFirst().getTimestamp() > slowDataTtl
|| slowDNQueue.size() >= minSlowDetectCount)) {
while (
!slowDNQueue.isEmpty() && (current - slowDNQueue.getFirst().getTimestamp() > slowDataTtl
|| slowDNQueue.size() >= minSlowDetectCount)
) {
slowDNQueue.removeFirst();
}
slowDNQueue.addLast(new PacketAckData(dataLength, processTime));
@ -177,13 +181,13 @@ public class StreamSlowMonitor implements ConfigurationObserver {
}
private void setConf(Configuration conf) {
this.minSlowDetectCount = conf.getInt(WAL_SLOW_DETECT_MIN_COUNT_KEY,
DEFAULT_WAL_SLOW_DETECT_MIN_COUNT);
this.minSlowDetectCount =
conf.getInt(WAL_SLOW_DETECT_MIN_COUNT_KEY, DEFAULT_WAL_SLOW_DETECT_MIN_COUNT);
this.slowDataTtl = conf.getLong(WAL_SLOW_DETECT_DATA_TTL_KEY, DEFAULT_WAL_SLOW_DETECT_DATA_TTL);
this.slowPacketAckMs = conf.getLong(DATANODE_SLOW_PACKET_PROCESS_TIME_KEY,
DEFAULT_DATANODE_SLOW_PACKET_PROCESS_TIME);
this.minLengthForSpeedCheck = conf.getLong(
DATANODE_PACKET_FLUSH_CHECK_SPEED_MIN_DATA_LENGTH_KEY,
DEFAULT_DATANODE_SLOW_PACKET_PROCESS_TIME);
this.minLengthForSpeedCheck =
conf.getLong(DATANODE_PACKET_FLUSH_CHECK_SPEED_MIN_DATA_LENGTH_KEY,
DEFAULT_DATANODE_PACKET_FLUSH_CHECK_SPEED_MIN_DATA_LENGTH);
this.minPacketFlushSpeedKBs = conf.getDouble(DATANODE_SLOW_PACKET_FLUSH_MIN_SPEED_KEY,
DEFAULT_DATANODE_SLOW_PACKET_FLUSH_MIN_SPEED);
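Editor's note: the inline comments above spell out the two slowness conditions — small packets are judged by ack latency alone, large packets by flush throughput. A tiny stand-alone illustration of that predicate; the thresholds are placeholders, not the shipped defaults.

public final class SlowPacketCheck {
  static boolean isSlow(long packetDataLen, long processTimeMs,
      long minLengthForSpeedCheck, long slowPacketAckMs, double minFlushSpeedKBs) {
    if (packetDataLen <= minLengthForSpeedCheck) {
      return processTimeMs > slowPacketAckMs;       // small packet: simple time limit
    }
    // bytes per millisecond is numerically the same as (decimal) KB per second
    return (double) packetDataLen / processTimeMs < minFlushSpeedKBs;
  }

  public static void main(String[] args) {
    // 100 KB packet acked in 5.1 s: about 19.6 KB/s, below a 20 KB/s floor -> slow
    System.out.println(isSlow(100_000, 5_100, 64 * 1024, 6_000, 20.0)); // true
    // 1 KB packet acked in 50 ms: small packet, well under the ack limit -> not slow
    System.out.println(isSlow(1_024, 50, 64 * 1024, 6_000, 20.0));      // false
  }
}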
@ -1,5 +1,4 @@
/*
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -21,8 +20,8 @@ package org.apache.hadoop.hbase.util;
import org.apache.yetus.audience.InterfaceAudience;
/**
* Similar interface as {@link org.apache.hadoop.util.Progressable} but returns
* a boolean to support canceling the operation.
* Similar interface as {@link org.apache.hadoop.util.Progressable} but returns a boolean to support
* canceling the operation.
* <p/>
* Used for doing updating of OPENING znode during log replay on region open.
*/
@ -30,8 +29,8 @@ import org.apache.yetus.audience.InterfaceAudience;
public interface CancelableProgressable {
/**
* Report progress. Returns true if operations should continue, false if the
* operation should be canceled and rolled back.
* Report progress. Returns true if operations should continue, false if the operation should be
* canceled and rolled back.
* @return whether to continue (true) or cancel (false) the operation
*/
boolean progress();
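Editor's note: a tiny usage sketch of the callback contract above (a local copy of the single-method interface keeps the snippet self-contained): the worker keeps going while progress() returns true and stops as soon as the caller cancels.

public final class ProgressLoop {
  interface CancelableProgressable {  // same single-method shape as the interface above
    boolean progress();
  }

  static int doWork(CancelableProgressable reporter) {
    int steps = 0;
    while (steps < 10) {
      if (!reporter.progress()) {
        break;                        // caller asked us to cancel and roll back
      }
      steps++;
    }
    return steps;
  }

  public static void main(String[] args) {
    int[] calls = {0};
    // cancel after three successful progress reports
    int done = doWork(() -> ++calls[0] <= 3);
    System.out.println("completed steps: " + done);  // 3
  }
}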
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -120,8 +120,10 @@ public final class RecoverLeaseFSUtils {
// Cycle here until (subsequentPause * nbAttempt) elapses. While spinning, check
// isFileClosed if available (should be in hadoop 2.0.5... not in hadoop 1 though.
long localStartWaiting = EnvironmentEdgeManager.currentTime();
while ((EnvironmentEdgeManager.currentTime() - localStartWaiting) < subsequentPauseBase *
nbAttempt) {
while (
(EnvironmentEdgeManager.currentTime() - localStartWaiting)
< subsequentPauseBase * nbAttempt
) {
Thread.sleep(conf.getInt("hbase.lease.recovery.pause", 1000));
if (findIsFileClosedMeth) {
try {
@ -152,10 +154,10 @@ public final class RecoverLeaseFSUtils {
private static boolean checkIfTimedout(final Configuration conf, final long recoveryTimeout,
final int nbAttempt, final Path p, final long startWaiting) {
if (recoveryTimeout < EnvironmentEdgeManager.currentTime()) {
LOG.warn("Cannot recoverLease after trying for " +
conf.getInt("hbase.lease.recovery.timeout", 900000) +
"ms (hbase.lease.recovery.timeout); continuing, but may be DATALOSS!!!; " +
getLogMessageDetail(nbAttempt, p, startWaiting));
LOG.warn("Cannot recoverLease after trying for "
+ conf.getInt("hbase.lease.recovery.timeout", 900000)
+ "ms (hbase.lease.recovery.timeout); continuing, but may be DATALOSS!!!; "
+ getLogMessageDetail(nbAttempt, p, startWaiting));
return true;
}
return false;
@ -170,8 +172,8 @@ public final class RecoverLeaseFSUtils {
boolean recovered = false;
try {
recovered = dfs.recoverLease(p);
LOG.info((recovered ? "Recovered lease, " : "Failed to recover lease, ") +
getLogMessageDetail(nbAttempt, p, startWaiting));
LOG.info((recovered ? "Recovered lease, " : "Failed to recover lease, ")
+ getLogMessageDetail(nbAttempt, p, startWaiting));
} catch (IOException e) {
if (e instanceof LeaseExpiredException && e.getMessage().contains("File does not exist")) {
// This exception comes out instead of FNFE, fix it
@ -189,8 +191,8 @@ public final class RecoverLeaseFSUtils {
*/
private static String getLogMessageDetail(final int nbAttempt, final Path p,
final long startWaiting) {
return "attempt=" + nbAttempt + " on file=" + p + " after " +
(EnvironmentEdgeManager.currentTime() - startWaiting) + "ms";
return "attempt=" + nbAttempt + " on file=" + p + " after "
+ (EnvironmentEdgeManager.currentTime() - startWaiting) + "ms";
}
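Editor's note: the recovery logic above boils down to "call recoverLease, pause, repeat until it reports success or the recovery timeout passes". A generic sketch of that shape, with a BooleanSupplier standing in for dfs.recoverLease(p); the timeout and pause values are placeholders.

import java.util.function.BooleanSupplier;

public final class RetryUntilRecovered {
  static boolean recover(BooleanSupplier recoverLease, long timeoutMs, long pauseMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    int attempt = 0;
    while (System.currentTimeMillis() < deadline) {
      attempt++;
      if (recoverLease.getAsBoolean()) {
        System.out.println("recovered on attempt=" + attempt);
        return true;
      }
      Thread.sleep(pauseMs);          // pause between attempts, as the loop above does
    }
    System.out.println("gave up after attempt=" + attempt + "; possible data loss");
    return false;
  }

  public static void main(String[] args) throws InterruptedException {
    int[] calls = {0};
    // pretend the NameNode needs three calls before the lease is released
    System.out.println(recover(() -> ++calls[0] >= 3, 5_000, 10));
  }
}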
/**
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -19,6 +19,7 @@ package org.apache.hadoop.hbase.io.asyncfs;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.HBaseConfiguration;
@ -44,19 +45,15 @@ public class TestExcludeDatanodeManager {
StreamSlowMonitor streamSlowDNsMonitor =
excludeDatanodeManager.getStreamSlowMonitor("testMonitor");
assertEquals(0, excludeDatanodeManager.getExcludeDNs().size());
DatanodeInfo datanodeInfo =
new DatanodeInfo.DatanodeInfoBuilder().setIpAddr("0.0.0.0").setHostName("hostname1")
.setDatanodeUuid("uuid1").setXferPort(111).setInfoPort(222).setInfoSecurePort(333)
.setIpcPort(444).setNetworkLocation("location1").build();
streamSlowDNsMonitor
.checkProcessTimeAndSpeed(datanodeInfo, 100000, 5100,
System.currentTimeMillis() - 5100, 0);
streamSlowDNsMonitor
.checkProcessTimeAndSpeed(datanodeInfo, 100000, 5100,
System.currentTimeMillis() - 5100, 0);
streamSlowDNsMonitor
.checkProcessTimeAndSpeed(datanodeInfo, 100000, 5100,
System.currentTimeMillis() - 5100, 0);
DatanodeInfo datanodeInfo = new DatanodeInfo.DatanodeInfoBuilder().setIpAddr("0.0.0.0")
.setHostName("hostname1").setDatanodeUuid("uuid1").setXferPort(111).setInfoPort(222)
.setInfoSecurePort(333).setIpcPort(444).setNetworkLocation("location1").build();
streamSlowDNsMonitor.checkProcessTimeAndSpeed(datanodeInfo, 100000, 5100,
System.currentTimeMillis() - 5100, 0);
streamSlowDNsMonitor.checkProcessTimeAndSpeed(datanodeInfo, 100000, 5100,
System.currentTimeMillis() - 5100, 0);
streamSlowDNsMonitor.checkProcessTimeAndSpeed(datanodeInfo, 100000, 5100,
System.currentTimeMillis() - 5100, 0);
assertEquals(1, excludeDatanodeManager.getExcludeDNs().size());
assertTrue(excludeDatanodeManager.getExcludeDNs().containsKey(datanodeInfo));
}
@ -68,19 +65,15 @@ public class TestExcludeDatanodeManager {
StreamSlowMonitor streamSlowDNsMonitor =
excludeDatanodeManager.getStreamSlowMonitor("testMonitor");
assertEquals(0, excludeDatanodeManager.getExcludeDNs().size());
DatanodeInfo datanodeInfo =
new DatanodeInfo.DatanodeInfoBuilder().setIpAddr("0.0.0.0").setHostName("hostname1")
.setDatanodeUuid("uuid1").setXferPort(111).setInfoPort(222).setInfoSecurePort(333)
.setIpcPort(444).setNetworkLocation("location1").build();
streamSlowDNsMonitor
.checkProcessTimeAndSpeed(datanodeInfo, 5000, 7000,
System.currentTimeMillis() - 7000, 0);
streamSlowDNsMonitor
.checkProcessTimeAndSpeed(datanodeInfo, 5000, 7000,
System.currentTimeMillis() - 7000, 0);
streamSlowDNsMonitor
.checkProcessTimeAndSpeed(datanodeInfo, 5000, 7000,
System.currentTimeMillis() - 7000, 0);
DatanodeInfo datanodeInfo = new DatanodeInfo.DatanodeInfoBuilder().setIpAddr("0.0.0.0")
.setHostName("hostname1").setDatanodeUuid("uuid1").setXferPort(111).setInfoPort(222)
.setInfoSecurePort(333).setIpcPort(444).setNetworkLocation("location1").build();
streamSlowDNsMonitor.checkProcessTimeAndSpeed(datanodeInfo, 5000, 7000,
System.currentTimeMillis() - 7000, 0);
streamSlowDNsMonitor.checkProcessTimeAndSpeed(datanodeInfo, 5000, 7000,
System.currentTimeMillis() - 7000, 0);
streamSlowDNsMonitor.checkProcessTimeAndSpeed(datanodeInfo, 5000, 7000,
System.currentTimeMillis() - 7000, 0);
assertEquals(1, excludeDatanodeManager.getExcludeDNs().size());
assertTrue(excludeDatanodeManager.getExcludeDNs().containsKey(datanodeInfo));
}
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -57,6 +57,7 @@ import org.junit.experimental.categories.Category;
import org.junit.rules.TestName;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.apache.hbase.thirdparty.io.netty.channel.Channel;
import org.apache.hbase.thirdparty.io.netty.channel.EventLoop;
import org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup;
@ -240,9 +241,9 @@ public class TestFanOutOneBlockAsyncDFSOutput extends AsyncFSTestBase {
StreamSlowMonitor streamSlowDNsMonitor =
excludeDatanodeManager.getStreamSlowMonitor("testMonitor");
assertEquals(0, excludeDatanodeManager.getExcludeDNs().size());
try (FanOutOneBlockAsyncDFSOutput output = FanOutOneBlockAsyncDFSOutputHelper.createOutput(FS,
f, true, false, (short) 3, FS.getDefaultBlockSize(), eventLoop,
CHANNEL_CLASS, streamSlowDNsMonitor)) {
try (FanOutOneBlockAsyncDFSOutput output =
FanOutOneBlockAsyncDFSOutputHelper.createOutput(FS, f, true, false, (short) 3,
FS.getDefaultBlockSize(), eventLoop, CHANNEL_CLASS, streamSlowDNsMonitor)) {
// should exclude the dead dn when retry so here we only have 2 DNs in pipeline
assertEquals(2, output.getPipeline().length);
assertEquals(1, excludeDatanodeManager.getExcludeDNs().size());
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -47,6 +47,7 @@ import org.junit.experimental.categories.Category;
import org.junit.rules.TestName;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf;
import org.apache.hbase.thirdparty.io.netty.channel.Channel;
import org.apache.hbase.thirdparty.io.netty.channel.ChannelHandlerContext;
@ -70,10 +71,10 @@ public class TestFanOutOneBlockAsyncDFSOutputHang extends AsyncFSTestBase {
@ClassRule
public static final HBaseClassTestRule CLASS_RULE =
HBaseClassTestRule.forClass(TestFanOutOneBlockAsyncDFSOutputHang.class);
HBaseClassTestRule.forClass(TestFanOutOneBlockAsyncDFSOutputHang.class);
private static final Logger LOG =
LoggerFactory.getLogger(TestFanOutOneBlockAsyncDFSOutputHang.class);
LoggerFactory.getLogger(TestFanOutOneBlockAsyncDFSOutputHang.class);
private static DistributedFileSystem FS;
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -31,7 +31,7 @@ public class TestSendBufSizePredictor {
@ClassRule
public static final HBaseClassTestRule CLASS_RULE =
HBaseClassTestRule.forClass(TestSendBufSizePredictor.class);
HBaseClassTestRule.forClass(TestSendBufSizePredictor.class);
@Test
public void test() {
@ -110,9 +110,9 @@ public final class HBaseKerberosUtils {
/**
* Set up configuration for a secure HDFS+HBase cluster.
* @param conf configuration object.
* @param conf configuration object.
* @param servicePrincipal service principal used by NN, HM and RS.
* @param spnegoPrincipal SPNEGO principal used by NN web UI.
* @param spnegoPrincipal SPNEGO principal used by NN web UI.
*/
public static void setSecuredConfiguration(Configuration conf, String servicePrincipal,
String spnegoPrincipal) {
@ -156,7 +156,7 @@ public final class HBaseKerberosUtils {
/**
* Set up SSL configuration for HDFS NameNode and DataNode.
* @param utility a HBaseTestingUtility object.
* @param clazz the caller test class.
* @param clazz the caller test class.
* @throws Exception if unable to set up SSL configuration
*/
public static void setSSLConfiguration(HBaseCommonTestingUtil utility, Class<?> clazz)
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -20,7 +20,6 @@ package org.apache.hadoop.hbase.util;
import static org.junit.Assert.assertTrue;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseClassTestRule;
@ -69,8 +68,8 @@ public class TestRecoverLeaseFSUtils {
Mockito.verify(dfs, Mockito.times(5)).recoverLease(FILE);
// Make sure we waited at least hbase.lease.recovery.dfs.timeout * 3 (the first two
// invocations will happen pretty fast... the we fall into the longer wait loop).
assertTrue((EnvironmentEdgeManager.currentTime() - startTime) > (3 *
HTU.getConfiguration().getInt("hbase.lease.recovery.dfs.timeout", 61000)));
assertTrue((EnvironmentEdgeManager.currentTime() - startTime)
> (3 * HTU.getConfiguration().getInt("hbase.lease.recovery.dfs.timeout", 61000)));
}
/**
@ -1,4 +1,4 @@
<?xml version="1.0"?>
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<!--
/**
@ -21,34 +21,14 @@
-->
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>hbase-build-configuration</artifactId>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-build-configuration</artifactId>
<version>3.0.0-alpha-3-SNAPSHOT</version>
<relativePath>../hbase-build-configuration</relativePath>
</parent>
<artifactId>hbase-backup</artifactId>
<name>Apache HBase - Backup</name>
<description>Backup for HBase</description>
<build>
<plugins>
<plugin>
<!--Make it so assembly:single does nothing in here-->
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<skipAssembly>true</skipAssembly>
</configuration>
</plugin>
<!-- Make a jar and put the sources in the jar -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-source-plugin</artifactId>
</plugin>
<plugin>
<groupId>net.revelc.code</groupId>
<artifactId>warbucks-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
<dependencies>
<!-- Intra-project dependencies -->
<dependency>
@ -173,12 +153,34 @@
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<!--Make it so assembly:single does nothing in here-->
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<skipAssembly>true</skipAssembly>
</configuration>
</plugin>
<!-- Make a jar and put the sources in the jar -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-source-plugin</artifactId>
</plugin>
<plugin>
<groupId>net.revelc.code</groupId>
<artifactId>warbucks-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
<profiles>
<!-- Profile for building against Hadoop 3.0.0. Activate by default -->
<profile>
<id>hadoop-3.0</id>
<activation>
<property><name>!hadoop.profile</name></property>
<property>
<name>!hadoop.profile</name>
</property>
</activation>
<dependencies>
<dependency>
@ -213,8 +215,7 @@
<artifactId>lifecycle-mapping</artifactId>
<configuration>
<lifecycleMappingMetadata>
<pluginExecutions>
</pluginExecutions>
<pluginExecutions/>
</lifecycleMappingMetadata>
</configuration>
</plugin>
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,13 +15,11 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.backup;
import java.io.Closeable;
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.backup.util.BackupSet;
import org.apache.yetus.audience.InterfaceAudience;
@ -30,8 +28,8 @@ import org.apache.yetus.audience.InterfaceAudience;
* The administrative API for HBase Backup. Construct an instance and call {@link #close()}
* afterwards.
* <p>
* BackupAdmin can be used to create backups, restore data from backups and for other
* backup-related operations.
* BackupAdmin can be used to create backups, restore data from backups and for other backup-related
* operations.
* @since 2.0
*/
@InterfaceAudience.Private
@ -71,9 +69,9 @@ public interface BackupAdmin extends Closeable {
/**
* Merge backup images command
* @param backupIds array of backup ids of images to be merged
* The resulting backup image will have the same backup id as the most
* recent image from a list of images to be merged
* @param backupIds array of backup ids of images to be merged The resulting backup image will
* have the same backup id as the most recent image from a list of images to be
* merged
* @throws IOException exception
*/
void mergeBackups(String[] backupIds) throws IOException;
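Editor's note: a hedged usage sketch for the merge API documented above. It assumes a running cluster with backup enabled and that BackupAdminImpl(Connection) is the in-tree implementation of this interface; the second backup id is a made-up placeholder, the first is taken from the Javadoc examples elsewhere in this commit.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.backup.BackupAdmin;
import org.apache.hadoop.hbase.backup.impl.BackupAdminImpl;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public final class MergeBackupsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
        BackupAdmin admin = new BackupAdminImpl(conn)) {
      // The merged image keeps the id of the most recent input image.
      admin.mergeBackups(new String[] { "backup_1396650096738", "backup_1396650096739" });
    }
  }
}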
@ -120,7 +118,7 @@ public interface BackupAdmin extends Closeable {
/**
* Add tables to backup set command
* @param name name of backup set.
* @param name name of backup set.
* @param tables array of tables to be added to this set.
* @throws IOException exception
*/
@ -128,7 +126,7 @@ public interface BackupAdmin extends Closeable {
/**
* Remove tables from backup set
* @param name name of backup set.
* @param name name of backup set.
* @param tables array of tables to be removed from this set.
* @throws IOException exception
*/
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -18,13 +18,11 @@
package org.apache.hadoop.hbase.backup;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.backup.impl.FullTableBackupClient;
import org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient;
import org.apache.hadoop.hbase.backup.impl.TableBackupClient;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.yetus.audience.InterfaceAudience;
@InterfaceAudience.Private
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,11 +15,9 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.backup;
import java.io.IOException;
import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.backup.impl.BackupManager;
@ -34,16 +32,16 @@ import org.apache.yetus.audience.InterfaceAudience;
public interface BackupCopyJob extends Configurable {
/**
* Copy backup data to destination
* @param backupInfo context object
* @param backupInfo context object
* @param backupManager backup manager
* @param conf configuration
* @param backupType backup type (FULL or INCREMENTAL)
* @param options array of options (implementation-specific)
* @param conf configuration
* @param backupType backup type (FULL or INCREMENTAL)
* @param options array of options (implementation-specific)
* @return result (0 - success, -1 failure )
* @throws IOException exception
*/
int copy(BackupInfo backupInfo, BackupManager backupManager, Configuration conf,
BackupType backupType, String[] options) throws IOException;
BackupType backupType, String[] options) throws IOException;
/**
* Cancel copy job
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -58,9 +58,7 @@ import org.slf4j.LoggerFactory;
import org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine;
/**
*
* Command-line entry point for backup operation
*
*/
@InterfaceAudience.Private
public class BackupDriver extends AbstractHBaseTool {
@ -23,7 +23,6 @@ import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
@ -54,7 +53,7 @@ public class BackupHFileCleaner extends BaseHFileCleanerDelegate implements Abor
private Connection connection;
private long prevReadFromBackupTbl = 0, // timestamp of most recent read from backup:system table
secondPrevReadFromBackupTbl = 0; // timestamp of 2nd most recent read from backup:system table
//used by unit test to skip reading backup:system
// used by unit test to skip reading backup:system
private boolean checkForFullyBackedUpTables = true;
private List<TableName> fullyBackedUpTables = null;
@ -79,8 +78,7 @@ public class BackupHFileCleaner extends BaseHFileCleanerDelegate implements Abor
connection = ConnectionFactory.createConnection(conf);
}
try (BackupSystemTable tbl = new BackupSystemTable(connection)) {
Map<byte[], List<Path>>[] res =
tbl.readBulkLoadedFiles(null, tableList);
Map<byte[], List<Path>>[] res = tbl.readBulkLoadedFiles(null, tableList);
secondPrevReadFromBackupTbl = prevReadFromBackupTbl;
prevReadFromBackupTbl = EnvironmentEdgeManager.currentTime();
return getFilenameFromBulkLoad(res);
@ -91,6 +89,7 @@ public class BackupHFileCleaner extends BaseHFileCleanerDelegate implements Abor
void setCheckForFullyBackedUpTables(boolean b) {
checkForFullyBackedUpTables = b;
}
@Override
public Iterable<FileStatus> getDeletableFiles(Iterable<FileStatus> files) {
if (conf == null) {
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,7 +15,6 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.backup;
import java.io.IOException;
@ -35,6 +34,7 @@ import org.apache.hadoop.hbase.util.Bytes;
import org.apache.yetus.audience.InterfaceAudience;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos;
import org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo.Builder;
@ -59,7 +59,10 @@ public class BackupInfo implements Comparable<BackupInfo> {
* Backup session states
*/
public enum BackupState {
RUNNING, COMPLETE, FAILED, ANY
RUNNING,
COMPLETE,
FAILED,
ANY
}
/**
@ -67,7 +70,12 @@ public class BackupInfo implements Comparable<BackupInfo> {
* BackupState.RUNNING
*/
public enum BackupPhase {
REQUEST, SNAPSHOT, PREPARE_INCREMENTAL, SNAPSHOTCOPY, INCREMENTAL_COPY, STORE_MANIFEST
REQUEST,
SNAPSHOT,
PREPARE_INCREMENTAL,
SNAPSHOTCOPY,
INCREMENTAL_COPY,
STORE_MANIFEST
}
/**
@ -137,8 +145,8 @@ public class BackupInfo implements Comparable<BackupInfo> {
private Map<TableName, Map<String, Long>> tableSetTimestampMap;
/**
* Previous Region server log timestamps for table set after distributed log roll key -
* table name, value - map of RegionServer hostname -> last log rolled timestamp
* Previous Region server log timestamps for table set after distributed log roll key - table
* name, value - map of RegionServer hostname -> last log rolled timestamp
*/
private Map<TableName, Map<String, Long>> incrTimestampMap;
@ -198,8 +206,7 @@ public class BackupInfo implements Comparable<BackupInfo> {
return tableSetTimestampMap;
}
public void setTableSetTimestampMap(Map<TableName,
Map<String, Long>> tableSetTimestampMap) {
public void setTableSetTimestampMap(Map<TableName, Map<String, Long>> tableSetTimestampMap) {
this.tableSetTimestampMap = tableSetTimestampMap;
}
@ -357,8 +364,7 @@ public class BackupInfo implements Comparable<BackupInfo> {
* Set the new region server log timestamps after distributed log roll
* @param prevTableSetTimestampMap table timestamp map
*/
public void setIncrTimestampMap(Map<TableName,
Map<String, Long>> prevTableSetTimestampMap) {
public void setIncrTimestampMap(Map<TableName, Map<String, Long>> prevTableSetTimestampMap) {
this.incrTimestampMap = prevTableSetTimestampMap;
}
@ -482,8 +488,8 @@ public class BackupInfo implements Comparable<BackupInfo> {
context.setState(BackupInfo.BackupState.valueOf(proto.getBackupState().name()));
}
context.setHLogTargetDir(BackupUtils.getLogBackupDir(proto.getBackupRootDir(),
proto.getBackupId()));
context
.setHLogTargetDir(BackupUtils.getLogBackupDir(proto.getBackupRootDir(), proto.getBackupId()));
if (proto.hasBackupPhase()) {
context.setPhase(BackupPhase.valueOf(proto.getBackupPhase().name()));
@ -507,12 +513,12 @@ public class BackupInfo implements Comparable<BackupInfo> {
return map;
}
private static Map<TableName, Map<String, Long>> getTableSetTimestampMap(
Map<String, BackupProtos.BackupInfo.RSTimestampMap> map) {
private static Map<TableName, Map<String, Long>>
getTableSetTimestampMap(Map<String, BackupProtos.BackupInfo.RSTimestampMap> map) {
Map<TableName, Map<String, Long>> tableSetTimestampMap = new HashMap<>();
for (Entry<String, BackupProtos.BackupInfo.RSTimestampMap> entry : map.entrySet()) {
tableSetTimestampMap
.put(TableName.valueOf(entry.getKey()), entry.getValue().getRsTimestampMap());
tableSetTimestampMap.put(TableName.valueOf(entry.getKey()),
entry.getValue().getRsTimestampMap());
}
return tableSetTimestampMap;
@ -549,7 +555,7 @@ public class BackupInfo implements Comparable<BackupInfo> {
public String getStatusAndProgressAsString() {
StringBuilder sb = new StringBuilder();
sb.append("id: ").append(getBackupId()).append(" state: ").append(getState())
.append(" progress: ").append(getProgress());
.append(" progress: ").append(getProgress());
return sb.toString();
}
@ -567,7 +573,7 @@ public class BackupInfo implements Comparable<BackupInfo> {
@Override
public int compareTo(BackupInfo o) {
Long thisTS =
Long.valueOf(this.getBackupId().substring(this.getBackupId().lastIndexOf("_") + 1));
Long.valueOf(this.getBackupId().substring(this.getBackupId().lastIndexOf("_") + 1));
Long otherTS = Long.valueOf(o.getBackupId().substring(o.getBackupId().lastIndexOf("_") + 1));
return thisTS.compareTo(otherTS);
}
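Editor's note: compareTo above orders backup images by the millisecond timestamp embedded at the end of the backup id ("backup_&lt;ts&gt;"). A tiny stand-alone illustration of that ordering; the ids are sample values.

import java.util.Arrays;

public final class BackupIdOrdering {
  static long timestampOf(String backupId) {
    // everything after the last underscore is the creation timestamp
    return Long.parseLong(backupId.substring(backupId.lastIndexOf('_') + 1));
  }

  public static void main(String[] args) {
    String[] ids = { "backup_1396650096739", "backup_1396650096738" };
    Arrays.sort(ids, (a, b) -> Long.compare(timestampOf(a), timestampOf(b)));
    System.out.println(Arrays.toString(ids)); // oldest image first
  }
}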
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,11 +15,9 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.backup;
import java.io.IOException;
import org.apache.hadoop.conf.Configurable;
import org.apache.yetus.audience.InterfaceAudience;
@ -32,7 +30,6 @@ import org.apache.yetus.audience.InterfaceAudience;
public interface BackupMergeJob extends Configurable {
/**
* Run backup merge operation.
*
* @param backupIds backup image ids
* @throws IOException if the backup merge operation fails
*/
@ -7,14 +7,13 @@
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.backup;
@ -22,7 +21,6 @@ import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseInterfaceAudience;
@ -56,7 +54,7 @@ public class BackupObserver implements RegionCoprocessor, RegionObserver {
@Override
public void postBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
List<Pair<byte[], String>> stagingFamilyPaths, Map<byte[], List<Path>> finalPaths)
throws IOException {
throws IOException {
Configuration cfg = ctx.getEnvironment().getConfiguration();
if (finalPaths == null) {
// there is no need to record state
@ -67,7 +65,7 @@ public class BackupObserver implements RegionCoprocessor, RegionObserver {
return;
}
try (Connection connection = ConnectionFactory.createConnection(cfg);
BackupSystemTable tbl = new BackupSystemTable(connection)) {
BackupSystemTable tbl = new BackupSystemTable(connection)) {
List<TableName> fullyBackedUpTables = tbl.getTablesForBackupType(BackupType.FULL);
RegionInfo info = ctx.getEnvironment().getRegionInfo();
TableName tableName = info.getTable();
@ -82,16 +80,17 @@ public class BackupObserver implements RegionCoprocessor, RegionObserver {
LOG.error("Failed to get tables which have been fully backed up", ioe);
}
}
@Override
public void preCommitStoreFile(final ObserverContext<RegionCoprocessorEnvironment> ctx,
final byte[] family, final List<Pair<Path, Path>> pairs) throws IOException {
final byte[] family, final List<Pair<Path, Path>> pairs) throws IOException {
Configuration cfg = ctx.getEnvironment().getConfiguration();
if (pairs == null || pairs.isEmpty() || !BackupManager.isBackupEnabled(cfg)) {
LOG.debug("skipping recording bulk load in preCommitStoreFile since backup is disabled");
return;
}
try (Connection connection = ConnectionFactory.createConnection(cfg);
BackupSystemTable tbl = new BackupSystemTable(connection)) {
BackupSystemTable tbl = new BackupSystemTable(connection)) {
List<TableName> fullyBackedUpTables = tbl.getTablesForBackupType(BackupType.FULL);
RegionInfo info = ctx.getEnvironment().getRegionInfo();
TableName tableName = info.getTable();
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,11 +15,9 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.backup;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.yetus.audience.InterfaceAudience;
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,7 +15,6 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.backup;
import org.apache.hadoop.hbase.HConstants;
@ -45,14 +44,14 @@ public interface BackupRestoreConstants {
int DEFAULT_BACKUP_ATTEMPTS_PAUSE_MS = 10000;
/*
* Drivers option list
* Drivers option list
*/
String OPTION_OVERWRITE = "o";
String OPTION_OVERWRITE_DESC = "Overwrite data if any of the restore target tables exists";
String OPTION_CHECK = "c";
String OPTION_CHECK_DESC =
"Check restore sequence and dependencies only (does not execute the command)";
"Check restore sequence and dependencies only (does not execute the command)";
String OPTION_SET = "s";
String OPTION_SET_DESC = "Backup set name";
@ -62,8 +61,8 @@ public interface BackupRestoreConstants {
String OPTION_DEBUG_DESC = "Enable debug loggings";
String OPTION_TABLE = "t";
String OPTION_TABLE_DESC = "Table name. If specified, only backup images,"
+ " which contain this table will be listed.";
String OPTION_TABLE_DESC =
"Table name. If specified, only backup images," + " which contain this table will be listed.";
String OPTION_LIST = "l";
String OPTION_TABLE_LIST_DESC = "Table name list, comma-separated.";
@ -84,37 +83,32 @@ public interface BackupRestoreConstants {
String OPTION_KEEP = "k";
String OPTION_KEEP_DESC = "Specifies maximum age of backup (in days) to keep during bulk delete";
String OPTION_TABLE_MAPPING = "m";
String OPTION_TABLE_MAPPING_DESC =
"A comma separated list of target tables. "
+ "If specified, each table in <tables> must have a mapping";
String OPTION_TABLE_MAPPING_DESC = "A comma separated list of target tables. "
+ "If specified, each table in <tables> must have a mapping";
String OPTION_YARN_QUEUE_NAME = "q";
String OPTION_YARN_QUEUE_NAME_DESC = "Yarn queue name to run backup create command on";
String OPTION_YARN_QUEUE_NAME_RESTORE_DESC = "Yarn queue name to run backup restore command on";
String JOB_NAME_CONF_KEY = "mapreduce.job.name";
String BACKUP_CONFIG_STRING = BackupRestoreConstants.BACKUP_ENABLE_KEY
+ "=true\n"
+ "hbase.master.logcleaner.plugins="
+"YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner\n"
+ "hbase.procedure.master.classes=YOUR_CLASSES,"
+"org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager\n"
+ "hbase.procedure.regionserver.classes=YOUR_CLASSES,"
+ "org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager\n"
+ "hbase.coprocessor.region.classes=YOUR_CLASSES,"
+ "org.apache.hadoop.hbase.backup.BackupObserver\n"
+ "and restart the cluster\n"
+ "For more information please see http://hbase.apache.org/book.html#backuprestore\n";
String ENABLE_BACKUP = "Backup is not enabled. To enable backup, "+
"in hbase-site.xml, set:\n "
+ BACKUP_CONFIG_STRING;
String BACKUP_CONFIG_STRING =
BackupRestoreConstants.BACKUP_ENABLE_KEY + "=true\n" + "hbase.master.logcleaner.plugins="
+ "YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner\n"
+ "hbase.procedure.master.classes=YOUR_CLASSES,"
+ "org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager\n"
+ "hbase.procedure.regionserver.classes=YOUR_CLASSES,"
+ "org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager\n"
+ "hbase.coprocessor.region.classes=YOUR_CLASSES,"
+ "org.apache.hadoop.hbase.backup.BackupObserver\n" + "and restart the cluster\n"
+ "For more information please see http://hbase.apache.org/book.html#backuprestore\n";
String ENABLE_BACKUP = "Backup is not enabled. To enable backup, " + "in hbase-site.xml, set:\n "
+ BACKUP_CONFIG_STRING;
String VERIFY_BACKUP = "To enable backup, in hbase-site.xml, set:\n " + BACKUP_CONFIG_STRING;
/*
* Delimiter in table name list in restore command
* Delimiter in table name list in restore command
*/
String TABLENAME_DELIMITER_IN_COMMAND = ",";
@ -123,7 +117,24 @@ public interface BackupRestoreConstants {
String BACKUPID_PREFIX = "backup_";
enum BackupCommand {
CREATE, CANCEL, DELETE, DESCRIBE, HISTORY, STATUS, CONVERT, MERGE, STOP, SHOW, HELP, PROGRESS,
SET, SET_ADD, SET_REMOVE, SET_DELETE, SET_DESCRIBE, SET_LIST, REPAIR
CREATE,
CANCEL,
DELETE,
DESCRIBE,
HISTORY,
STATUS,
CONVERT,
MERGE,
STOP,
SHOW,
HELP,
PROGRESS,
SET,
SET_ADD,
SET_REMOVE,
SET_DELETE,
SET_DESCRIBE,
SET_LIST,
REPAIR
}
}
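Editor's note: the help text above lists the hbase-site.xml settings an operator needs before the backup commands will run. The same settings expressed programmatically for a test-style Configuration; the property values are copied from the message, the boolean key is assumed to be hbase.backup.enable, and the YOUR_PLUGINS/YOUR_CLASSES prefixes are dropped for brevity.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public final class EnableBackupConfSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setBoolean("hbase.backup.enable", true);
    conf.set("hbase.master.logcleaner.plugins",
        "org.apache.hadoop.hbase.backup.master.BackupLogCleaner");
    conf.set("hbase.procedure.master.classes",
        "org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager");
    conf.set("hbase.procedure.regionserver.classes",
        "org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager");
    conf.set("hbase.coprocessor.region.classes",
        "org.apache.hadoop.hbase.backup.BackupObserver");
    System.out.println("backup enabled: " + conf.getBoolean("hbase.backup.enable", false));
  }
}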
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -26,7 +26,6 @@ import org.apache.yetus.audience.InterfaceAudience;
/**
* Factory implementation for backup/restore related jobs
*
*/
@InterfaceAudience.Private
public final class BackupRestoreFactory {
@ -45,7 +44,7 @@ public final class BackupRestoreFactory {
*/
public static RestoreJob getRestoreJob(Configuration conf) {
Class<? extends RestoreJob> cls =
conf.getClass(HBASE_INCR_RESTORE_IMPL_CLASS, MapReduceRestoreJob.class, RestoreJob.class);
conf.getClass(HBASE_INCR_RESTORE_IMPL_CLASS, MapReduceRestoreJob.class, RestoreJob.class);
RestoreJob service = ReflectionUtils.newInstance(cls, conf);
service.setConf(conf);
return service;
@ -57,9 +56,8 @@ public final class BackupRestoreFactory {
* @return backup copy job instance
*/
public static BackupCopyJob getBackupCopyJob(Configuration conf) {
Class<? extends BackupCopyJob> cls =
conf.getClass(HBASE_BACKUP_COPY_IMPL_CLASS, MapReduceBackupCopyJob.class,
BackupCopyJob.class);
Class<? extends BackupCopyJob> cls = conf.getClass(HBASE_BACKUP_COPY_IMPL_CLASS,
MapReduceBackupCopyJob.class, BackupCopyJob.class);
BackupCopyJob service = ReflectionUtils.newInstance(cls, conf);
service.setConf(conf);
return service;
@ -71,9 +69,8 @@ public final class BackupRestoreFactory {
* @return backup merge job instance
*/
public static BackupMergeJob getBackupMergeJob(Configuration conf) {
Class<? extends BackupMergeJob> cls =
conf.getClass(HBASE_BACKUP_MERGE_IMPL_CLASS, MapReduceBackupMergeJob.class,
BackupMergeJob.class);
Class<? extends BackupMergeJob> cls = conf.getClass(HBASE_BACKUP_MERGE_IMPL_CLASS,
MapReduceBackupMergeJob.class, BackupMergeJob.class);
BackupMergeJob service = ReflectionUtils.newInstance(cls, conf);
service.setConf(conf);
return service;
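Editor's note: the factory methods above all follow the same conf-driven plugin pattern — look up an implementation class under a configuration key, default to the MapReduce implementation, and instantiate it reflectively. A generic sketch of that pattern; the key name and interface here are illustrative stand-ins, not the real ones.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

public final class PluggableJobFactory {
  public interface CopyJob { void run(); }

  public static class DefaultCopyJob implements CopyJob {
    @Override public void run() { System.out.println("default copy job"); }
  }

  static CopyJob create(Configuration conf) {
    // operators can swap the implementation by setting the key in their site config
    Class<? extends CopyJob> cls =
        conf.getClass("example.backup.copy.impl", DefaultCopyJob.class, CopyJob.class);
    return ReflectionUtils.newInstance(cls, conf);
  }

  public static void main(String[] args) {
    create(new Configuration()).run();   // prints "default copy job"
  }
}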
@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,11 +15,11 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.backup;
import org.apache.hadoop.hbase.TableName;
import org.apache.yetus.audience.InterfaceAudience;
import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos;
@ -29,14 +29,14 @@ import org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos;
*/
@InterfaceAudience.Private
public class BackupTableInfo {
public class BackupTableInfo {
/*
* Table name for backup
* Table name for backup
*/
private TableName table;
/*
* Snapshot name for offline/online snapshot
* Snapshot name for offline/online snapshot
*/
private String snapshotName = null;
@ -1,14 +1,13 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@ -16,12 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.backup;
import java.io.IOException;
import java.util.HashMap;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
@ -52,15 +49,15 @@ public final class HBackupFileSystem {
* "hdfs://backup.hbase.org:9000/user/biadmin/backup/backup_1396650096738/default/t1_dn/", where
* "hdfs://backup.hbase.org:9000/user/biadmin/backup" is a backup root directory
* @param backupRootDir backup root directory
* @param backupId backup id
* @param tableName table name
* @param backupId backup id
* @param tableName table name
* @return backupPath String for the particular table
*/
public static String
getTableBackupDir(String backupRootDir, String backupId, TableName tableName) {
public static String getTableBackupDir(String backupRootDir, String backupId,
TableName tableName) {
return backupRootDir + Path.SEPARATOR + backupId + Path.SEPARATOR
+ tableName.getNamespaceAsString() + Path.SEPARATOR + tableName.getQualifierAsString()
+ Path.SEPARATOR;
+ tableName.getNamespaceAsString() + Path.SEPARATOR + tableName.getQualifierAsString()
+ Path.SEPARATOR;
}
/**
@ -75,7 +72,7 @@ public final class HBackupFileSystem {
/**
* Get backup tmp directory for backupId
* @param backupRoot backup root
* @param backupId backup id
* @param backupId backup id
* @return backup tmp directory path
*/
public static Path getBackupTmpDirPathForBackupId(String backupRoot, String backupId) {
@ -83,7 +80,7 @@ public final class HBackupFileSystem {
}
public static String getTableBackupDataDir(String backupRootDir, String backupId,
TableName tableName) {
TableName tableName) {
return getTableBackupDir(backupRootDir, backupId, tableName) + Path.SEPARATOR + "data";
}
@ -97,8 +94,8 @@ public final class HBackupFileSystem {
* "hdfs://backup.hbase.org:9000/user/biadmin/backup/backup_1396650096738/default/t1_dn/", where
* "hdfs://backup.hbase.org:9000/user/biadmin/backup" is a backup root directory
* @param backupRootPath backup root path
* @param tableName table name
* @param backupId backup Id
* @param tableName table name
* @param backupId backup Id
* @return backupPath for the particular table
*/
public static Path getTableBackupPath(TableName tableName, Path backupRootPath, String backupId) {
@ -109,12 +106,12 @@ public final class HBackupFileSystem {
* Given the backup root dir and the backup id, return the log file location for an incremental
* backup.
* @param backupRootDir backup root directory
* @param backupId backup id
* @param backupId backup id
* @return logBackupDir: ".../user/biadmin/backup/WALs/backup_1396650096738"
*/
public static String getLogBackupDir(String backupRootDir, String backupId) {
return backupRootDir + Path.SEPARATOR + backupId + Path.SEPARATOR
+ HConstants.HREGION_LOGDIR_NAME;
+ HConstants.HREGION_LOGDIR_NAME;
}
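Editor's note: the helpers above simply concatenate a fixed layout under the backup root. A plain-string sketch of the two paths, mirroring the code rather than the Javadoc example, with Path.SEPARATOR written as "/" and HConstants.HREGION_LOGDIR_NAME assumed to be "WALs"; root, backup id, namespace, and table are sample values.

public final class BackupLayoutSketch {
  static String tableBackupDir(String root, String backupId, String ns, String table) {
    // <root>/<backupId>/<namespace>/<table>/
    return root + "/" + backupId + "/" + ns + "/" + table + "/";
  }

  static String logBackupDir(String root, String backupId) {
    // <root>/<backupId>/WALs
    return root + "/" + backupId + "/WALs";
  }

  public static void main(String[] args) {
    String root = "hdfs://backup.hbase.org:9000/user/biadmin/backup";
    System.out.println(tableBackupDir(root, "backup_1396650096738", "default", "t1_dn"));
    System.out.println(logBackupDir(root, "backup_1396650096738"));
  }
}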
public static Path getLogBackupPath(String backupRootDir, String backupId) {
@ -124,37 +121,35 @@ public final class HBackupFileSystem {
// TODO we do not keep WAL files anymore
// Move manifest file to other place
private static Path getManifestPath(Configuration conf, Path backupRootPath, String backupId)
throws IOException {
throws IOException {
FileSystem fs = backupRootPath.getFileSystem(conf);
Path manifestPath =
new Path(getBackupPath(backupRootPath.toString(), backupId) + Path.SEPARATOR
+ BackupManifest.MANIFEST_FILE_NAME);
Path manifestPath = new Path(getBackupPath(backupRootPath.toString(), backupId) + Path.SEPARATOR
+ BackupManifest.MANIFEST_FILE_NAME);
if (!fs.exists(manifestPath)) {
String errorMsg =
"Could not find backup manifest " + BackupManifest.MANIFEST_FILE_NAME + " for "
+ backupId + ". File " + manifestPath + " does not exists. Did " + backupId
+ " correspond to previously taken backup ?";
String errorMsg = "Could not find backup manifest " + BackupManifest.MANIFEST_FILE_NAME
+ " for " + backupId + ". File " + manifestPath + " does not exists. Did " + backupId
+ " correspond to previously taken backup ?";
throw new IOException(errorMsg);
}
return manifestPath;
}
public static BackupManifest
getManifest(Configuration conf, Path backupRootPath, String backupId) throws IOException {
public static BackupManifest getManifest(Configuration conf, Path backupRootPath, String backupId)
throws IOException {
BackupManifest manifest =
new BackupManifest(conf, getManifestPath(conf, backupRootPath, backupId));
new BackupManifest(conf, getManifestPath(conf, backupRootPath, backupId));
return manifest;
}
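A hedged usage sketch of the manifest lookup above; the configuration and paths are illustrative, and an IOException is raised when the manifest file is absent (see getManifestPath):
  Configuration conf = HBaseConfiguration.create();
  Path backupRoot = new Path("hdfs://backup.hbase.org:9000/user/biadmin/backup");
  BackupManifest manifest =
    HBackupFileSystem.getManifest(conf, backupRoot, "backup_1396650096738");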
/**
   * Check whether the backup image exists and whether a manifest file is present in its path.
* @param backupManifestMap If all the manifests are found, then they are put into this map
* @param tableArray the tables involved
* @param tableArray the tables involved
* @throws IOException exception
*/
public static void checkImageManifestExist(HashMap<TableName, BackupManifest> backupManifestMap,
TableName[] tableArray, Configuration conf, Path backupRootPath, String backupId)
throws IOException {
TableName[] tableArray, Configuration conf, Path backupRootPath, String backupId)
throws IOException {
for (TableName tableName : tableArray) {
BackupManifest manifest = getManifest(conf, backupRootPath, backupId);
backupManifestMap.put(tableName, manifest);

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -59,9 +59,7 @@ import org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine;
import org.apache.hbase.thirdparty.org.apache.commons.cli.HelpFormatter;
/**
*
* Command-line entry point for restore operation
*
*/
@InterfaceAudience.Private
public class RestoreDriver extends AbstractHBaseTool {
@ -69,10 +67,10 @@ public class RestoreDriver extends AbstractHBaseTool {
private CommandLine cmd;
private static final String USAGE_STRING =
"Usage: hbase restore <backup_path> <backup_id> [options]\n"
+ " backup_path Path to a backup destination root\n"
+ " backup_id Backup image ID to restore\n"
+ " table(s) Comma-separated list of tables to restore\n";
"Usage: hbase restore <backup_path> <backup_id> [options]\n"
+ " backup_path Path to a backup destination root\n"
+ " backup_id Backup image ID to restore\n"
+ " table(s) Comma-separated list of tables to restore\n";
private static final String USAGE_FOOTER = "";
@ -101,19 +99,19 @@ public class RestoreDriver extends AbstractHBaseTool {
boolean overwrite = cmd.hasOption(OPTION_OVERWRITE);
if (overwrite) {
LOG.debug("Found -overwrite option in restore command, "
+ "will overwrite to existing table if any in the restore target");
+ "will overwrite to existing table if any in the restore target");
}
// whether to only check the dependencies, false by default
boolean check = cmd.hasOption(OPTION_CHECK);
if (check) {
LOG.debug("Found -check option in restore command, "
+ "will check and verify the dependencies");
LOG.debug(
"Found -check option in restore command, " + "will check and verify the dependencies");
}
if (cmd.hasOption(OPTION_SET) && cmd.hasOption(OPTION_TABLE)) {
System.err.println("Options -s and -t are mutaully exclusive,"+
" you can not specify both of them.");
System.err.println(
"Options -s and -t are mutaully exclusive," + " you can not specify both of them.");
printToolUsage();
return -1;
}
@ -141,9 +139,9 @@ public class RestoreDriver extends AbstractHBaseTool {
String backupId = remainArgs[1];
String tables;
String tableMapping =
cmd.hasOption(OPTION_TABLE_MAPPING) ? cmd.getOptionValue(OPTION_TABLE_MAPPING) : null;
cmd.hasOption(OPTION_TABLE_MAPPING) ? cmd.getOptionValue(OPTION_TABLE_MAPPING) : null;
try (final Connection conn = ConnectionFactory.createConnection(conf);
BackupAdmin client = new BackupAdminImpl(conn)) {
BackupAdmin client = new BackupAdminImpl(conn)) {
// Check backup set
if (cmd.hasOption(OPTION_SET)) {
String setName = cmd.getOptionValue(OPTION_SET);
@ -155,8 +153,8 @@ public class RestoreDriver extends AbstractHBaseTool {
return -2;
}
if (tables == null) {
System.out.println("ERROR: Backup set '" + setName
+ "' is either empty or does not exist");
System.out
.println("ERROR: Backup set '" + setName + "' is either empty or does not exist");
printToolUsage();
return -3;
}
@ -167,15 +165,16 @@ public class RestoreDriver extends AbstractHBaseTool {
TableName[] sTableArray = BackupUtils.parseTableNames(tables);
TableName[] tTableArray = BackupUtils.parseTableNames(tableMapping);
if (sTableArray != null && tTableArray != null &&
(sTableArray.length != tTableArray.length)) {
if (
sTableArray != null && tTableArray != null && (sTableArray.length != tTableArray.length)
) {
System.out.println("ERROR: table mapping mismatch: " + tables + " : " + tableMapping);
printToolUsage();
return -4;
}
client.restore(BackupUtils.createRestoreRequest(backupRootDir, backupId, check,
sTableArray, tTableArray, overwrite));
client.restore(BackupUtils.createRestoreRequest(backupRootDir, backupId, check, sTableArray,
tTableArray, overwrite));
} catch (Exception e) {
LOG.error("Error while running restore backup", e);
return -5;
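For orientation, a minimal sketch of the restore call assembled in the hunk above, with hypothetical table names; conf, backupRootDir and backupId stand for the values parsed earlier in the driver:
  try (Connection conn = ConnectionFactory.createConnection(conf);
    BackupAdmin client = new BackupAdminImpl(conn)) {
    TableName[] from = BackupUtils.parseTableNames("t1");        // source tables (hypothetical)
    TableName[] to = BackupUtils.parseTableNames("t1_restored"); // mapped target tables
    client.restore(BackupUtils.createRestoreRequest(backupRootDir, backupId,
      false /* check */, from, to, false /* overwrite */));
  }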
@ -184,7 +183,7 @@ public class RestoreDriver extends AbstractHBaseTool {
}
private String getTablesForSet(Connection conn, String name, Configuration conf)
throws IOException {
throws IOException {
try (final BackupSystemTable table = new BackupSystemTable(conn)) {
List<TableName> tables = table.describeBackupSet(name);

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,11 +15,9 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.backup;
import java.io.IOException;
import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.TableName;
@ -34,12 +32,12 @@ import org.apache.yetus.audience.InterfaceAudience;
public interface RestoreJob extends Configurable {
/**
* Run restore operation
* @param dirPaths path array of WAL log directories
* @param fromTables from tables
* @param toTables to tables
* @param dirPaths path array of WAL log directories
* @param fromTables from tables
* @param toTables to tables
* @param fullBackupRestore full backup restore
* @throws IOException if running the job fails
*/
void run(Path[] dirPaths, TableName[] fromTables, TableName[] toTables,
boolean fullBackupRestore) throws IOException;
void run(Path[] dirPaths, TableName[] fromTables, TableName[] toTables, boolean fullBackupRestore)
throws IOException;
}
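A hedged sketch of driving this interface; the concrete RestoreJob instance is assumed to be supplied by the backup/restore factory used elsewhere in this module:
  void runRestore(RestoreJob job, Path[] dirPaths, TableName[] from, TableName[] to)
    throws IOException {
    // full-backup restore; pass false when replaying an incremental image
    job.run(dirPaths, from, to, true);
  }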

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -25,7 +25,6 @@ import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.commons.lang3.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
@ -57,7 +56,7 @@ import org.apache.hbase.thirdparty.com.google.common.collect.Lists;
public class BackupAdminImpl implements BackupAdmin {
public final static String CHECK_OK = "Checking backup images: OK";
public final static String CHECK_FAILED =
"Checking backup images: Failed. Some dependencies are missing for restore";
"Checking backup images: Failed. Some dependencies are missing for restore";
private static final Logger LOG = LoggerFactory.getLogger(BackupAdminImpl.class);
private final Connection conn;
@ -107,8 +106,8 @@ public class BackupAdminImpl implements BackupAdmin {
deleteSessionStarted = true;
} catch (IOException e) {
LOG.warn("You can not run delete command while active backup session is in progress. \n"
+ "If there is no active backup session running, run backup repair utility to "
+ "restore \nbackup system integrity.");
+ "If there is no active backup session running, run backup repair utility to "
+ "restore \nbackup system integrity.");
return -1;
}
@ -158,7 +157,7 @@ public class BackupAdminImpl implements BackupAdmin {
BackupSystemTable.deleteSnapshot(conn);
// We still have record with unfinished delete operation
LOG.error("Delete operation failed, please run backup repair utility to restore "
+ "backup system integrity", e);
+ "backup system integrity", e);
throw e;
} else {
LOG.warn("Delete operation succeeded, there were some errors: ", e);
@ -177,15 +176,15 @@ public class BackupAdminImpl implements BackupAdmin {
/**
* Updates incremental backup set for every backupRoot
* @param tablesMap map [backupRoot: {@code Set<TableName>}]
* @param table backup system table
* @param table backup system table
* @throws IOException if a table operation fails
*/
private void finalizeDelete(Map<String, HashSet<TableName>> tablesMap, BackupSystemTable table)
throws IOException {
throws IOException {
for (String backupRoot : tablesMap.keySet()) {
Set<TableName> incrTableSet = table.getIncrementalBackupTableSet(backupRoot);
Map<TableName, ArrayList<BackupInfo>> tableMap =
table.getBackupHistoryForTableSet(incrTableSet, backupRoot);
table.getBackupHistoryForTableSet(incrTableSet, backupRoot);
for (Map.Entry<TableName, ArrayList<BackupInfo>> entry : tableMap.entrySet()) {
if (entry.getValue() == null) {
// No more backups for a table
@ -283,10 +282,10 @@ public class BackupAdminImpl implements BackupAdmin {
}
private void removeTableFromBackupImage(BackupInfo info, TableName tn, BackupSystemTable sysTable)
throws IOException {
throws IOException {
List<TableName> tables = info.getTableNames();
LOG.debug("Remove " + tn + " from " + info.getBackupId() + " tables="
+ info.getTableListAsString());
LOG.debug(
"Remove " + tn + " from " + info.getBackupId() + " tables=" + info.getTableListAsString());
if (tables.contains(tn)) {
tables.remove(tn);
@ -306,7 +305,7 @@ public class BackupAdminImpl implements BackupAdmin {
}
private List<BackupInfo> getAffectedBackupSessions(BackupInfo backupInfo, TableName tn,
BackupSystemTable table) throws IOException {
BackupSystemTable table) throws IOException {
LOG.debug("GetAffectedBackupInfos for: " + backupInfo.getBackupId() + " table=" + tn);
long ts = backupInfo.getStartTs();
List<BackupInfo> list = new ArrayList<>();
@ -325,7 +324,7 @@ public class BackupAdminImpl implements BackupAdmin {
list.clear();
} else {
LOG.debug("GetAffectedBackupInfos for: " + backupInfo.getBackupId() + " table=" + tn
+ " added " + info.getBackupId() + " tables=" + info.getTableListAsString());
+ " added " + info.getBackupId() + " tables=" + info.getTableListAsString());
list.add(info);
}
}
@ -338,7 +337,7 @@ public class BackupAdminImpl implements BackupAdmin {
* @throws IOException if cleaning up the backup directory fails
*/
private void cleanupBackupDir(BackupInfo backupInfo, TableName table, Configuration conf)
throws IOException {
throws IOException {
try {
// clean up the data at target directory
String targetDir = backupInfo.getBackupRootDir();
@ -349,9 +348,8 @@ public class BackupAdminImpl implements BackupAdmin {
FileSystem outputFs = FileSystem.get(new Path(backupInfo.getBackupRootDir()).toUri(), conf);
Path targetDirPath =
new Path(BackupUtils.getTableBackupDir(backupInfo.getBackupRootDir(),
backupInfo.getBackupId(), table));
Path targetDirPath = new Path(BackupUtils.getTableBackupDir(backupInfo.getBackupRootDir(),
backupInfo.getBackupId(), table));
if (outputFs.delete(targetDirPath, true)) {
LOG.info("Cleaning up backup data at " + targetDirPath.toString() + " done.");
} else {
@ -359,13 +357,13 @@ public class BackupAdminImpl implements BackupAdmin {
}
} catch (IOException e1) {
LOG.error("Cleaning up backup data of " + backupInfo.getBackupId() + " for table " + table
+ "at " + backupInfo.getBackupRootDir() + " failed due to " + e1.getMessage() + ".");
+ "at " + backupInfo.getBackupRootDir() + " failed due to " + e1.getMessage() + ".");
throw e1;
}
}
private boolean isLastBackupSession(BackupSystemTable table, TableName tn, long startTime)
throws IOException {
throws IOException {
List<BackupInfo> history = table.getBackupHistory();
for (BackupInfo info : history) {
List<TableName> tables = info.getTableNames();
@ -466,7 +464,7 @@ public class BackupAdminImpl implements BackupAdmin {
public void addToBackupSet(String name, TableName[] tables) throws IOException {
String[] tableNames = new String[tables.length];
try (final BackupSystemTable table = new BackupSystemTable(conn);
final Admin admin = conn.getAdmin()) {
final Admin admin = conn.getAdmin()) {
for (int i = 0; i < tables.length; i++) {
tableNames[i] = tables[i].getNameAsString();
if (!admin.tableExists(TableName.valueOf(tableNames[i]))) {
@ -474,8 +472,8 @@ public class BackupAdminImpl implements BackupAdmin {
}
}
table.addToBackupSet(name, tableNames);
LOG.info("Added tables [" + StringUtils.join(tableNames, " ") + "] to '" + name
+ "' backup set");
LOG.info(
"Added tables [" + StringUtils.join(tableNames, " ") + "] to '" + name + "' backup set");
}
}
@ -484,8 +482,8 @@ public class BackupAdminImpl implements BackupAdmin {
LOG.info("Removing tables [" + StringUtils.join(tables, " ") + "] from '" + name + "'");
try (final BackupSystemTable table = new BackupSystemTable(conn)) {
table.removeFromBackupSet(name, toStringArray(tables));
LOG.info("Removing tables [" + StringUtils.join(tables, " ") + "] from '" + name
+ "' completed.");
LOG.info(
"Removing tables [" + StringUtils.join(tables, " ") + "] from '" + name + "' completed.");
}
}
@ -534,9 +532,9 @@ public class BackupAdminImpl implements BackupAdmin {
}
if (incrTableSet.isEmpty()) {
String msg = "Incremental backup table set contains no tables. "
+ "You need to run full backup first "
+ (tableList != null ? "on " + StringUtils.join(tableList, ",") : "");
String msg =
"Incremental backup table set contains no tables. " + "You need to run full backup first "
+ (tableList != null ? "on " + StringUtils.join(tableList, ",") : "");
throw new IOException(msg);
}
@ -545,7 +543,7 @@ public class BackupAdminImpl implements BackupAdmin {
if (!tableList.isEmpty()) {
String extraTables = StringUtils.join(tableList, ",");
String msg = "Some tables (" + extraTables + ") haven't gone through full backup. "
+ "Perform full backup on " + extraTables + " first, " + "then retry the command";
+ "Perform full backup on " + extraTables + " first, " + "then retry the command";
throw new IOException(msg);
}
}
@ -554,13 +552,13 @@ public class BackupAdminImpl implements BackupAdmin {
if (tableList != null && !tableList.isEmpty()) {
for (TableName table : tableList) {
String targetTableBackupDir =
HBackupFileSystem.getTableBackupDir(targetRootDir, backupId, table);
HBackupFileSystem.getTableBackupDir(targetRootDir, backupId, table);
Path targetTableBackupDirPath = new Path(targetTableBackupDir);
FileSystem outputFs =
FileSystem.get(targetTableBackupDirPath.toUri(), conn.getConfiguration());
FileSystem.get(targetTableBackupDirPath.toUri(), conn.getConfiguration());
if (outputFs.exists(targetTableBackupDirPath)) {
throw new IOException("Target backup directory " + targetTableBackupDir
+ " exists already.");
throw new IOException(
"Target backup directory " + targetTableBackupDir + " exists already.");
}
outputFs.mkdirs(targetTableBackupDirPath);
}
@ -581,8 +579,8 @@ public class BackupAdminImpl implements BackupAdmin {
tableList = excludeNonExistingTables(tableList, nonExistingTableList);
} else {
// Throw exception only in full mode - we try to backup non-existing table
throw new IOException("Non-existing tables found in the table list: "
+ nonExistingTableList);
throw new IOException(
"Non-existing tables found in the table list: " + nonExistingTableList);
}
}
}
@ -590,9 +588,9 @@ public class BackupAdminImpl implements BackupAdmin {
// update table list
BackupRequest.Builder builder = new BackupRequest.Builder();
request = builder.withBackupType(request.getBackupType()).withTableList(tableList)
.withTargetRootDir(request.getTargetRootDir())
.withBackupSetName(request.getBackupSetName()).withTotalTasks(request.getTotalTasks())
.withBandwidthPerTasks((int) request.getBandwidth()).build();
.withTargetRootDir(request.getTargetRootDir()).withBackupSetName(request.getBackupSetName())
.withTotalTasks(request.getTotalTasks()).withBandwidthPerTasks((int) request.getBandwidth())
.build();
TableBackupClient client;
try {
@ -608,7 +606,7 @@ public class BackupAdminImpl implements BackupAdmin {
}
private List<TableName> excludeNonExistingTables(List<TableName> tableList,
List<TableName> nonExistingTableList) {
List<TableName> nonExistingTableList) {
for (TableName table : nonExistingTableList) {
tableList.remove(table);
}
@ -619,7 +617,7 @@ public class BackupAdminImpl implements BackupAdmin {
public void mergeBackups(String[] backupIds) throws IOException {
try (final BackupSystemTable sysTable = new BackupSystemTable(conn)) {
checkIfValidForMerge(backupIds, sysTable);
//TODO run job on remote cluster
// TODO run job on remote cluster
BackupMergeJob job = BackupRestoreFactory.getBackupMergeJob(conn.getConfiguration());
job.run(backupIds);
}
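A hedged usage sketch of the merge entry point above; the ids are illustrative and, per checkIfValidForMerge below, must all be COMPLETE INCREMENTAL images under the same backup destination:
  try (Connection conn = ConnectionFactory.createConnection(conf);
    BackupAdmin admin = new BackupAdminImpl(conn)) {
    admin.mergeBackups(new String[] { "backup_1396650096738", "backup_1396650096739" });
  }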
@ -627,7 +625,6 @@ public class BackupAdminImpl implements BackupAdmin {
/**
* Verifies that backup images are valid for merge.
*
* <ul>
* <li>All backups MUST be in the same destination
* <li>No FULL backups are allowed - only INCREMENTAL
@ -636,11 +633,11 @@ public class BackupAdminImpl implements BackupAdmin {
* </ul>
* <p>
* @param backupIds list of backup ids
* @param table backup system table
* @param table backup system table
* @throws IOException if the backup image is not valid for merge
*/
private void checkIfValidForMerge(String[] backupIds, BackupSystemTable table)
throws IOException {
throws IOException {
String backupRoot = null;
final Set<TableName> allTables = new HashSet<>();
@ -656,7 +653,7 @@ public class BackupAdminImpl implements BackupAdmin {
backupRoot = bInfo.getBackupRootDir();
} else if (!bInfo.getBackupRootDir().equals(backupRoot)) {
throw new IOException("Found different backup destinations in a list of a backup sessions "
+ "\n1. " + backupRoot + "\n" + "2. " + bInfo.getBackupRootDir());
+ "\n1. " + backupRoot + "\n" + "2. " + bInfo.getBackupRootDir());
}
if (bInfo.getType() == BackupType.FULL) {
throw new IOException("FULL backup image can not be merged for: \n" + bInfo);
@ -664,7 +661,7 @@ public class BackupAdminImpl implements BackupAdmin {
if (bInfo.getState() != BackupState.COMPLETE) {
throw new IOException("Backup image " + backupId
+ " can not be merged becuase of its state: " + bInfo.getState());
+ " can not be merged becuase of its state: " + bInfo.getState());
}
allBackups.add(backupId);
allTables.addAll(bInfo.getTableNames());
@ -677,7 +674,7 @@ public class BackupAdminImpl implements BackupAdmin {
}
}
final long startRangeTime = minTime;
final long startRangeTime = minTime;
final long endRangeTime = maxTime;
final String backupDest = backupRoot;
// Check we have no 'holes' in backup id list
@ -688,7 +685,7 @@ public class BackupAdminImpl implements BackupAdmin {
BackupInfo.Filter timeRangeFilter = info -> {
long time = info.getStartTs();
return time >= startRangeTime && time <= endRangeTime ;
return time >= startRangeTime && time <= endRangeTime;
};
BackupInfo.Filter tableFilter = info -> {
@ -699,20 +696,20 @@ public class BackupAdminImpl implements BackupAdmin {
BackupInfo.Filter typeFilter = info -> info.getType() == BackupType.INCREMENTAL;
BackupInfo.Filter stateFilter = info -> info.getState() == BackupState.COMPLETE;
List<BackupInfo> allInfos = table.getBackupHistory(-1, destinationFilter,
timeRangeFilter, tableFilter, typeFilter, stateFilter);
List<BackupInfo> allInfos = table.getBackupHistory(-1, destinationFilter, timeRangeFilter,
tableFilter, typeFilter, stateFilter);
if (allInfos.size() != allBackups.size()) {
// Yes we have at least one hole in backup image sequence
// Yes we have at least one hole in backup image sequence
List<String> missingIds = new ArrayList<>();
for(BackupInfo info: allInfos) {
if(allBackups.contains(info.getBackupId())) {
for (BackupInfo info : allInfos) {
if (allBackups.contains(info.getBackupId())) {
continue;
}
missingIds.add(info.getBackupId());
}
String errMsg =
"Sequence of backup ids has 'holes'. The following backup images must be added:" +
org.apache.hadoop.util.StringUtils.join(",", missingIds);
"Sequence of backup ids has 'holes'. The following backup images must be added:"
+ org.apache.hadoop.util.StringUtils.join(",", missingIds);
throw new IOException(errMsg);
}
}

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,7 +15,6 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.backup.impl;
import static org.apache.hadoop.hbase.backup.BackupRestoreConstants.OPTION_BACKUP_LIST_DESC;
@ -44,7 +43,6 @@ import static org.apache.hadoop.hbase.backup.BackupRestoreConstants.OPTION_YARN_
import java.io.IOException;
import java.net.URI;
import java.util.List;
import org.apache.commons.lang3.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
@ -80,33 +78,32 @@ public final class BackupCommands {
public final static String INCORRECT_USAGE = "Incorrect usage";
public final static String TOP_LEVEL_NOT_ALLOWED =
"Top level (root) folder is not allowed to be a backup destination";
"Top level (root) folder is not allowed to be a backup destination";
public static final String USAGE = "Usage: hbase backup COMMAND [command-specific arguments]\n"
+ "where COMMAND is one of:\n" + " create create a new backup image\n"
+ " delete delete an existing backup image\n"
+ " describe show the detailed information of a backup image\n"
+ " history show history of all successful backups\n"
+ " progress show the progress of the latest backup request\n"
+ " set backup set management\n"
+ " repair repair backup system table\n"
+ " merge merge backup images\n"
+ "Run \'hbase backup COMMAND -h\' to see help message for each command\n";
+ "where COMMAND is one of:\n" + " create create a new backup image\n"
+ " delete delete an existing backup image\n"
+ " describe show the detailed information of a backup image\n"
+ " history show history of all successful backups\n"
+ " progress show the progress of the latest backup request\n"
+ " set backup set management\n" + " repair repair backup system table\n"
+ " merge merge backup images\n"
+ "Run \'hbase backup COMMAND -h\' to see help message for each command\n";
public static final String CREATE_CMD_USAGE =
"Usage: hbase backup create <type> <backup_path> [options]\n"
+ " type \"full\" to create a full backup image\n"
+ " \"incremental\" to create an incremental backup image\n"
+ " backup_path Full path to store the backup image\n";
"Usage: hbase backup create <type> <backup_path> [options]\n"
+ " type \"full\" to create a full backup image\n"
+ " \"incremental\" to create an incremental backup image\n"
+ " backup_path Full path to store the backup image\n";
public static final String PROGRESS_CMD_USAGE = "Usage: hbase backup progress <backup_id>\n"
+ " backup_id Backup image id (optional). If no id specified, the command will show\n"
+ " progress for currently running backup session.";
+ " backup_id Backup image id (optional). If no id specified, the command will show\n"
+ " progress for currently running backup session.";
public static final String NO_INFO_FOUND = "No info was found for backup id: ";
public static final String NO_ACTIVE_SESSION_FOUND = "No active backup sessions found.";
public static final String DESCRIBE_CMD_USAGE = "Usage: hbase backup describe <backup_id>\n"
+ " backup_id Backup image id\n";
public static final String DESCRIBE_CMD_USAGE =
"Usage: hbase backup describe <backup_id>\n" + " backup_id Backup image id\n";
public static final String HISTORY_CMD_USAGE = "Usage: hbase backup history [options]";
@ -115,14 +112,13 @@ public final class BackupCommands {
public static final String REPAIR_CMD_USAGE = "Usage: hbase backup repair\n";
public static final String SET_CMD_USAGE = "Usage: hbase backup set COMMAND [name] [tables]\n"
+ " name Backup set name\n"
+ " tables Comma separated list of tables.\n" + "COMMAND is one of:\n"
+ " add add tables to a set, create a set if needed\n"
+ " remove remove tables from a set\n"
+ " list list all backup sets in the system\n"
+ " describe describe set\n" + " delete delete backup set\n";
+ " name Backup set name\n" + " tables Comma separated list of tables.\n"
+ "COMMAND is one of:\n" + " add add tables to a set, create a set if needed\n"
+ " remove remove tables from a set\n"
+ " list list all backup sets in the system\n" + " describe describe set\n"
+ " delete delete backup set\n";
public static final String MERGE_CMD_USAGE = "Usage: hbase backup merge [backup_ids]\n"
+ " backup_ids Comma separated list of backup image ids.\n";
+ " backup_ids Comma separated list of backup image ids.\n";
public static final String USAGE_FOOTER = "";
@ -281,8 +277,10 @@ public final class BackupCommands {
throw new IOException(INCORRECT_USAGE);
}
if (!BackupType.FULL.toString().equalsIgnoreCase(args[1])
&& !BackupType.INCREMENTAL.toString().equalsIgnoreCase(args[1])) {
if (
!BackupType.FULL.toString().equalsIgnoreCase(args[1])
&& !BackupType.INCREMENTAL.toString().equalsIgnoreCase(args[1])
) {
System.out.println("ERROR: invalid backup type: " + args[1]);
printUsage();
throw new IOException(INCORRECT_USAGE);
@ -301,8 +299,8 @@ public final class BackupCommands {
// Check if we have both: backup set and list of tables
if (cmdline.hasOption(OPTION_TABLE) && cmdline.hasOption(OPTION_SET)) {
System.out.println("ERROR: You can specify either backup set or list"
+ " of tables, but not both");
System.out
.println("ERROR: You can specify either backup set or list" + " of tables, but not both");
printUsage();
throw new IOException(INCORRECT_USAGE);
}
@ -315,20 +313,20 @@ public final class BackupCommands {
tables = getTablesForSet(setName, getConf());
if (tables == null) {
System.out.println("ERROR: Backup set '" + setName
+ "' is either empty or does not exist");
System.out
.println("ERROR: Backup set '" + setName + "' is either empty or does not exist");
printUsage();
throw new IOException(INCORRECT_USAGE);
}
} else {
tables = cmdline.getOptionValue(OPTION_TABLE);
}
int bandwidth =
cmdline.hasOption(OPTION_BANDWIDTH) ? Integer.parseInt(cmdline
.getOptionValue(OPTION_BANDWIDTH)) : -1;
int workers =
cmdline.hasOption(OPTION_WORKERS) ? Integer.parseInt(cmdline
.getOptionValue(OPTION_WORKERS)) : -1;
int bandwidth = cmdline.hasOption(OPTION_BANDWIDTH)
? Integer.parseInt(cmdline.getOptionValue(OPTION_BANDWIDTH))
: -1;
int workers = cmdline.hasOption(OPTION_WORKERS)
? Integer.parseInt(cmdline.getOptionValue(OPTION_WORKERS))
: -1;
if (cmdline.hasOption(OPTION_YARN_QUEUE_NAME)) {
String queueName = cmdline.getOptionValue(OPTION_YARN_QUEUE_NAME);
@ -338,13 +336,11 @@ public final class BackupCommands {
try (BackupAdminImpl admin = new BackupAdminImpl(conn)) {
BackupRequest.Builder builder = new BackupRequest.Builder();
BackupRequest request =
builder
.withBackupType(BackupType.valueOf(args[1].toUpperCase()))
.withTableList(
tables != null ? Lists.newArrayList(BackupUtils.parseTableNames(tables)) : null)
.withTargetRootDir(targetBackupDir).withTotalTasks(workers)
.withBandwidthPerTasks(bandwidth).withBackupSetName(setName).build();
BackupRequest request = builder.withBackupType(BackupType.valueOf(args[1].toUpperCase()))
.withTableList(
tables != null ? Lists.newArrayList(BackupUtils.parseTableNames(tables)) : null)
.withTargetRootDir(targetBackupDir).withTotalTasks(workers)
.withBandwidthPerTasks(bandwidth).withBackupSetName(setName).build();
String backupId = admin.backupTables(request);
System.out.println("Backup session " + backupId + " finished. Status: SUCCESS");
} catch (IOException e) {
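A minimal sketch of the builder chain above with literal sample values (table, root directory, worker count and bandwidth are illustrative; admin stands for the BackupAdminImpl opened in the try block):
  BackupRequest request = new BackupRequest.Builder()
    .withBackupType(BackupType.FULL)
    .withTableList(Lists.newArrayList(TableName.valueOf("default", "t1_dn")))
    .withTargetRootDir("hdfs://backup.hbase.org:9000/user/biadmin/backup")
    .withTotalTasks(3)          // parallel workers
    .withBandwidthPerTasks(100) // MB/s per worker
    .withBackupSetName(null)    // no backup set in use
    .build();
  String backupId = admin.backupTables(request);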
@ -506,8 +502,8 @@ public final class BackupCommands {
public void execute() throws IOException {
if (cmdline == null || cmdline.getArgs() == null || cmdline.getArgs().length == 1) {
System.out.println("No backup id was specified, "
+ "will retrieve the most recent (ongoing) session");
System.out.println(
"No backup id was specified, " + "will retrieve the most recent (ongoing) session");
}
String[] args = cmdline == null ? null : cmdline.getArgs();
if (args != null && args.length > 2) {
@ -601,15 +597,15 @@ public final class BackupCommands {
};
List<BackupInfo> history = null;
try (final BackupSystemTable sysTable = new BackupSystemTable(conn);
BackupAdminImpl admin = new BackupAdminImpl(conn)) {
BackupAdminImpl admin = new BackupAdminImpl(conn)) {
history = sysTable.getBackupHistory(-1, dateFilter);
String[] backupIds = convertToBackupIds(history);
int deleted = admin.deleteBackups(backupIds);
System.out.println("Deleted " + deleted + " backups. Total older than " + days + " days: "
+ backupIds.length);
+ backupIds.length);
} catch (IOException e) {
System.err.println("Delete command FAILED. Please run backup repair tool to restore backup "
+ "system integrity");
+ "system integrity");
throw e;
}
}
@ -631,7 +627,7 @@ public final class BackupCommands {
System.out.println("Deleted " + deleted + " backups. Total requested: " + backupIds.length);
} catch (IOException e) {
System.err.println("Delete command FAILED. Please run backup repair tool to restore backup "
+ "system integrity");
+ "system integrity");
throw e;
}
@ -673,14 +669,14 @@ public final class BackupCommands {
Configuration conf = getConf() != null ? getConf() : HBaseConfiguration.create();
try (final Connection conn = ConnectionFactory.createConnection(conf);
final BackupSystemTable sysTable = new BackupSystemTable(conn)) {
final BackupSystemTable sysTable = new BackupSystemTable(conn)) {
// Failed backup
BackupInfo backupInfo;
List<BackupInfo> list = sysTable.getBackupInfos(BackupState.RUNNING);
if (list.size() == 0) {
// No failed sessions found
System.out.println("REPAIR status: no failed sessions found."
+ " Checking failed delete backup operation ...");
+ " Checking failed delete backup operation ...");
repairFailedBackupDeletionIfAny(conn, sysTable);
repairFailedBackupMergeIfAny(conn, sysTable);
return;
@ -694,10 +690,9 @@ public final class BackupCommands {
// set overall backup status: failed
backupInfo.setState(BackupState.FAILED);
// compose the backup failed data
String backupFailedData =
"BackupId=" + backupInfo.getBackupId() + ",startts=" + backupInfo.getStartTs()
+ ",failedts=" + backupInfo.getCompleteTs() + ",failedphase="
+ backupInfo.getPhase() + ",failedmessage=" + backupInfo.getFailedMsg();
String backupFailedData = "BackupId=" + backupInfo.getBackupId() + ",startts="
+ backupInfo.getStartTs() + ",failedts=" + backupInfo.getCompleteTs() + ",failedphase="
+ backupInfo.getPhase() + ",failedmessage=" + backupInfo.getFailedMsg();
System.out.println(backupFailedData);
TableBackupClient.cleanupAndRestoreBackupSystem(conn, backupInfo, conf);
// If backup session is updated to FAILED state - means we
@ -709,7 +704,7 @@ public final class BackupCommands {
}
private void repairFailedBackupDeletionIfAny(Connection conn, BackupSystemTable sysTable)
throws IOException {
throws IOException {
String[] backupIds = sysTable.getListOfBackupIdsFromDeleteOperation();
if (backupIds == null || backupIds.length == 0) {
System.out.println("No failed backup DELETE operation found");
@ -730,7 +725,7 @@ public final class BackupCommands {
}
public static void repairFailedBackupMergeIfAny(Connection conn, BackupSystemTable sysTable)
throws IOException {
throws IOException {
String[] backupIds = sysTable.getListOfBackupIdsFromMergeOperation();
if (backupIds == null || backupIds.length == 0) {
@ -754,9 +749,11 @@ public final class BackupCommands {
}
boolean res = fs.rename(tmpPath, destPath);
if (!res) {
throw new IOException("MERGE repair: failed to rename from "+ tmpPath+" to "+ destPath);
throw new IOException(
"MERGE repair: failed to rename from " + tmpPath + " to " + destPath);
}
System.out.println("MERGE repair: renamed from "+ tmpPath+" to "+ destPath+" res="+ res);
System.out
.println("MERGE repair: renamed from " + tmpPath + " to " + destPath + " res=" + res);
} else {
checkRemoveBackupImages(fs, backupRoot, backupIds);
}
@ -773,16 +770,16 @@ public final class BackupCommands {
private static void checkRemoveBackupImages(FileSystem fs, String backupRoot,
String[] backupIds) throws IOException {
String mergedBackupId = BackupUtils.findMostRecentBackupId(backupIds);
for (String backupId: backupIds) {
for (String backupId : backupIds) {
if (backupId.equals(mergedBackupId)) {
continue;
}
Path path = HBackupFileSystem.getBackupPath(backupRoot, backupId);
if (fs.exists(path)) {
if (!fs.delete(path, true)) {
System.out.println("MERGE repair removing: "+ path +" - FAILED");
System.out.println("MERGE repair removing: " + path + " - FAILED");
} else {
System.out.println("MERGE repair removing: "+ path +" - OK");
System.out.println("MERGE repair removing: " + path + " - OK");
}
}
}
@ -816,23 +813,23 @@ public final class BackupCommands {
String[] args = cmdline == null ? null : cmdline.getArgs();
if (args == null || (args.length != 2)) {
System.err.println("ERROR: wrong number of arguments: "
+ (args == null ? null : args.length));
System.err
.println("ERROR: wrong number of arguments: " + (args == null ? null : args.length));
printUsage();
throw new IOException(INCORRECT_USAGE);
}
String[] backupIds = args[1].split(",");
if (backupIds.length < 2) {
String msg = "ERROR: can not merge a single backup image. "+
"Number of images must be greater than 1.";
String msg = "ERROR: can not merge a single backup image. "
+ "Number of images must be greater than 1.";
System.err.println(msg);
throw new IOException(msg);
}
Configuration conf = getConf() != null ? getConf() : HBaseConfiguration.create();
try (final Connection conn = ConnectionFactory.createConnection(conf);
final BackupAdminImpl admin = new BackupAdminImpl(conn)) {
final BackupAdminImpl admin = new BackupAdminImpl(conn)) {
admin.mergeBackups(backupIds);
}
}
@ -889,7 +886,7 @@ public final class BackupCommands {
} else {
// load from backup FS
history =
BackupUtils.getHistory(getConf(), n, backupRootPath, tableNameFilter, tableSetFilter);
BackupUtils.getHistory(getConf(), n, backupRootPath, tableNameFilter, tableSetFilter);
}
for (BackupInfo info : history) {
System.out.println(info.getShortDescription());

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,7 +15,6 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.backup.impl;
import org.apache.hadoop.hbase.HBaseIOException;
@ -48,7 +47,7 @@ public class BackupException extends HBaseIOException {
/**
* Exception for the given backup that has no previous root cause
* @param msg reason why the backup failed
* @param msg reason why the backup failed
* @param desc description of the backup that is being failed
*/
public BackupException(String msg, BackupInfo desc) {
@ -58,9 +57,9 @@ public class BackupException extends HBaseIOException {
/**
* Exception for the given backup due to another exception
* @param msg reason why the backup failed
* @param msg reason why the backup failed
* @param cause root cause of the failure
* @param desc description of the backup that is being failed
* @param desc description of the backup that is being failed
*/
public BackupException(String msg, Throwable cause, BackupInfo desc) {
super(msg, cause);
@ -68,10 +67,9 @@ public class BackupException extends HBaseIOException {
}
/**
* Exception when the description of the backup cannot be determined, due to some other root
* cause
* Exception when the description of the backup cannot be determined, due to some other root cause
* @param message description of what caused the failure
* @param e root cause
* @param e root cause
*/
public BackupException(String message, Exception e) {
super(message, e);

View File

@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -60,7 +59,7 @@ import org.slf4j.LoggerFactory;
public class BackupManager implements Closeable {
// in seconds
public final static String BACKUP_EXCLUSIVE_OPERATION_TIMEOUT_SECONDS_KEY =
"hbase.backup.exclusive.op.timeout.seconds";
"hbase.backup.exclusive.op.timeout.seconds";
// In seconds
private final static int DEFAULT_BACKUP_EXCLUSIVE_OPERATION_TIMEOUT = 3600;
private static final Logger LOG = LoggerFactory.getLogger(BackupManager.class);
@ -77,10 +76,12 @@ public class BackupManager implements Closeable {
* @throws IOException exception
*/
public BackupManager(Connection conn, Configuration conf) throws IOException {
if (!conf.getBoolean(BackupRestoreConstants.BACKUP_ENABLE_KEY,
BackupRestoreConstants.BACKUP_ENABLE_DEFAULT)) {
if (
!conf.getBoolean(BackupRestoreConstants.BACKUP_ENABLE_KEY,
BackupRestoreConstants.BACKUP_ENABLE_DEFAULT)
) {
throw new BackupException("HBase backup is not enabled. Check your "
+ BackupRestoreConstants.BACKUP_ENABLE_KEY + " setting.");
+ BackupRestoreConstants.BACKUP_ENABLE_KEY + " setting.");
}
this.conf = conf;
this.conn = conn;
@ -120,12 +121,13 @@ public class BackupManager implements Closeable {
}
plugins = conf.get(HFileCleaner.MASTER_HFILE_CLEANER_PLUGINS);
conf.set(HFileCleaner.MASTER_HFILE_CLEANER_PLUGINS, (plugins == null ? "" : plugins + ",") +
BackupHFileCleaner.class.getName());
conf.set(HFileCleaner.MASTER_HFILE_CLEANER_PLUGINS,
(plugins == null ? "" : plugins + ",") + BackupHFileCleaner.class.getName());
if (LOG.isDebugEnabled()) {
LOG.debug("Added log cleaner: {}. Added master procedure manager: {}."
+"Added master procedure manager: {}", cleanerClass, masterProcedureClass,
BackupHFileCleaner.class.getName());
LOG.debug(
"Added log cleaner: {}. Added master procedure manager: {}."
+ "Added master procedure manager: {}",
cleanerClass, masterProcedureClass, BackupHFileCleaner.class.getName());
}
}
@ -163,8 +165,7 @@ public class BackupManager implements Closeable {
}
/**
* Get configuration
* @return configuration
   * Get configuration
*/
Configuration getConf() {
return conf;
@ -186,17 +187,15 @@ public class BackupManager implements Closeable {
/**
* Creates a backup info based on input backup request.
* @param backupId backup id
* @param type type
* @param tableList table list
* @param backupId backup id
* @param type type
* @param tableList table list
* @param targetRootDir root dir
* @param workers number of parallel workers
* @param bandwidth bandwidth per worker in MB per sec
* @return BackupInfo
* @throws BackupException exception
* @param workers number of parallel workers
   * @param bandwidth bandwidth per worker in MB per sec
   * @throws BackupException exception
*/
public BackupInfo createBackupInfo(String backupId, BackupType type, List<TableName> tableList,
String targetRootDir, int workers, long bandwidth) throws BackupException {
String targetRootDir, int workers, long bandwidth) throws BackupException {
if (targetRootDir == null) {
throw new BackupException("Wrong backup request parameter: target backup root directory");
}
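A hedged sketch of calling this factory method; the argument values mirror the Javadoc parameters above and are purely illustrative (backupManager is a BackupManager built as in the constructor above, and java.util.Arrays is assumed to be imported):
  BackupInfo info = backupManager.createBackupInfo(
    "backup_1396650096738", BackupType.FULL,
    Arrays.asList(TableName.valueOf("default", "t1_dn")),
    "hdfs://backup.hbase.org:9000/user/biadmin/backup",
    3 /* workers */, 100 /* bandwidth, MB/s per worker */);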
@ -292,8 +291,8 @@ public class BackupManager implements Closeable {
BackupImage.Builder builder = BackupImage.newBuilder();
BackupImage image = builder.withBackupId(backup.getBackupId()).withType(backup.getType())
.withRootDir(backup.getBackupRootDir()).withTableList(backup.getTableNames())
.withStartTime(backup.getStartTs()).withCompleteTime(backup.getCompleteTs()).build();
.withRootDir(backup.getBackupRootDir()).withTableList(backup.getTableNames())
.withStartTime(backup.getStartTs()).withCompleteTime(backup.getCompleteTs()).build();
// Only direct ancestors for a backup are required and not entire history of backup for this
// table resulting in verifying all of the previous backups which is unnecessary and backup
@ -320,21 +319,21 @@ public class BackupManager implements Closeable {
if (BackupManifest.canCoverImage(ancestors, image)) {
LOG.debug("Met the backup boundary of the current table set:");
for (BackupImage image1 : ancestors) {
LOG.debug(" BackupID={}, BackupDir={}", image1.getBackupId(), image1.getRootDir());
LOG.debug(" BackupID={}, BackupDir={}", image1.getBackupId(), image1.getRootDir());
}
} else {
Path logBackupPath =
HBackupFileSystem.getBackupPath(backup.getBackupRootDir(), backup.getBackupId());
LOG.debug("Current backup has an incremental backup ancestor, "
+ "touching its image manifest in {}"
+ " to construct the dependency.", logBackupPath.toString());
HBackupFileSystem.getBackupPath(backup.getBackupRootDir(), backup.getBackupId());
LOG.debug(
"Current backup has an incremental backup ancestor, "
+ "touching its image manifest in {}" + " to construct the dependency.",
logBackupPath.toString());
BackupManifest lastIncrImgManifest = new BackupManifest(conf, logBackupPath);
BackupImage lastIncrImage = lastIncrImgManifest.getBackupImage();
ancestors.add(lastIncrImage);
LOG.debug(
"Last dependent incremental backup image: {BackupID={}" +
"BackupDir={}}", lastIncrImage.getBackupId(), lastIncrImage.getRootDir());
LOG.debug("Last dependent incremental backup image: {BackupID={}" + "BackupDir={}}",
lastIncrImage.getBackupId(), lastIncrImage.getRootDir());
}
}
}
@ -345,12 +344,12 @@ public class BackupManager implements Closeable {
/**
* Get the direct ancestors of this backup for one table involved.
* @param backupInfo backup info
* @param table table
* @param table table
* @return backupImages on the dependency list
* @throws IOException exception
*/
public ArrayList<BackupImage> getAncestors(BackupInfo backupInfo, TableName table)
throws IOException {
throws IOException {
ArrayList<BackupImage> ancestors = getAncestors(backupInfo);
ArrayList<BackupImage> tableAncestors = new ArrayList<>();
for (BackupImage image : ancestors) {
@ -399,11 +398,13 @@ public class BackupManager implements Closeable {
// Restore the interrupted status
Thread.currentThread().interrupt();
}
if (lastWarningOutputTime == 0
|| (EnvironmentEdgeManager.currentTime() - lastWarningOutputTime) > 60000) {
if (
lastWarningOutputTime == 0
|| (EnvironmentEdgeManager.currentTime() - lastWarningOutputTime) > 60000
) {
lastWarningOutputTime = EnvironmentEdgeManager.currentTime();
LOG.warn("Waiting to acquire backup exclusive lock for {}s",
+(lastWarningOutputTime - startTime) / 1000);
+(lastWarningOutputTime - startTime) / 1000);
}
} else {
throw e;
@ -480,8 +481,8 @@ public class BackupManager implements Closeable {
* @param tables tables
* @throws IOException exception
*/
public void writeRegionServerLogTimestamp(Set<TableName> tables,
Map<String, Long> newTimestamps) throws IOException {
public void writeRegionServerLogTimestamp(Set<TableName> tables, Map<String, Long> newTimestamps)
throws IOException {
systemTable.writeRegionServerLogTimestamp(tables, newTimestamps, backupInfo.getBackupRootDir());
}

View File

@ -1,4 +1,4 @@
/**
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -15,7 +15,6 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.backup.impl;
import java.io.IOException;
@ -26,7 +25,6 @@ import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.TreeMap;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
@ -50,9 +48,8 @@ import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
/**
* Backup manifest contains all the meta data of a backup image. The manifest info will be bundled
 * as a manifest file together with the data, so that each backup image contains all the info needed
* for restore. BackupManifest is a storage container for BackupImage.
* It is responsible for storing/reading backup image data and has some additional utility methods.
*
* for restore. BackupManifest is a storage container for BackupImage. It is responsible for
* storing/reading backup image data and has some additional utility methods.
*/
@InterfaceAudience.Private
public class BackupManifest {
@ -126,8 +123,8 @@ public class BackupManifest {
super();
}
private BackupImage(String backupId, BackupType type, String rootDir,
List<TableName> tableList, long startTs, long completeTs) {
private BackupImage(String backupId, BackupType type, String rootDir, List<TableName> tableList,
long startTs, long completeTs) {
this.backupId = backupId;
this.type = type;
this.rootDir = rootDir;
@ -149,9 +146,9 @@ public class BackupManifest {
List<BackupProtos.BackupImage> ancestorList = im.getAncestorsList();
BackupType type =
im.getBackupType() == BackupProtos.BackupType.FULL ? BackupType.FULL
: BackupType.INCREMENTAL;
BackupType type = im.getBackupType() == BackupProtos.BackupType.FULL
? BackupType.FULL
: BackupType.INCREMENTAL;
BackupImage image = new BackupImage(backupId, type, rootDir, tableList, startTs, completeTs);
for (BackupProtos.BackupImage img : ancestorList) {
@ -187,8 +184,8 @@ public class BackupManifest {
return builder.build();
}
private static Map<TableName, Map<String, Long>> loadIncrementalTimestampMap(
BackupProtos.BackupImage proto) {
private static Map<TableName, Map<String, Long>>
loadIncrementalTimestampMap(BackupProtos.BackupImage proto) {
List<BackupProtos.TableServerTimestamp> list = proto.getTstMapList();
Map<TableName, Map<String, Long>> incrTimeRanges = new HashMap<>();
@ -221,13 +218,13 @@ public class BackupManifest {
TableName key = entry.getKey();
Map<String, Long> value = entry.getValue();
BackupProtos.TableServerTimestamp.Builder tstBuilder =
BackupProtos.TableServerTimestamp.newBuilder();
BackupProtos.TableServerTimestamp.newBuilder();
tstBuilder.setTableName(ProtobufUtil.toProtoTableName(key));
for (Map.Entry<String, Long> entry2 : value.entrySet()) {
String s = entry2.getKey();
BackupProtos.ServerTimestamp.Builder stBuilder =
BackupProtos.ServerTimestamp.newBuilder();
BackupProtos.ServerTimestamp.newBuilder();
HBaseProtos.ServerName.Builder snBuilder = HBaseProtos.ServerName.newBuilder();
ServerName sn = ServerName.parseServerName(s);
snBuilder.setHostName(sn.getHostname());
@ -378,10 +375,9 @@ public class BackupManifest {
*/
public BackupManifest(BackupInfo backup) {
BackupImage.Builder builder = BackupImage.newBuilder();
this.backupImage =
builder.withBackupId(backup.getBackupId()).withType(backup.getType())
.withRootDir(backup.getBackupRootDir()).withTableList(backup.getTableNames())
.withStartTime(backup.getStartTs()).withCompleteTime(backup.getCompleteTs()).build();
this.backupImage = builder.withBackupId(backup.getBackupId()).withType(backup.getType())
.withRootDir(backup.getBackupRootDir()).withTableList(backup.getTableNames())
.withStartTime(backup.getStartTs()).withCompleteTime(backup.getCompleteTs()).build();
}
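For context, a hedged sketch of loading an existing manifest back from its backup directory (the directory-based constructor is documented a few hunks below); conf, root and id are illustrative:
  Path backupPath = HBackupFileSystem.getBackupPath(
    "hdfs://backup.hbase.org:9000/user/biadmin/backup", "backup_1396650096738");
  BackupManifest loaded = new BackupManifest(conf, backupPath);
  BackupImage image = loaded.getBackupImage();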
/**
@ -393,16 +389,14 @@ public class BackupManifest {
List<TableName> tables = new ArrayList<TableName>();
tables.add(table);
BackupImage.Builder builder = BackupImage.newBuilder();
this.backupImage =
builder.withBackupId(backup.getBackupId()).withType(backup.getType())
.withRootDir(backup.getBackupRootDir()).withTableList(tables)
.withStartTime(backup.getStartTs()).withCompleteTime(backup.getCompleteTs()).build();
this.backupImage = builder.withBackupId(backup.getBackupId()).withType(backup.getType())
.withRootDir(backup.getBackupRootDir()).withTableList(tables)
.withStartTime(backup.getStartTs()).withCompleteTime(backup.getCompleteTs()).build();
}
/**
* Construct manifest from a backup directory.
*
* @param conf configuration
* @param conf configuration
* @param backupPath backup path
* @throws IOException if constructing the manifest from the backup directory fails
*/
@ -412,7 +406,7 @@ public class BackupManifest {
/**
* Construct manifest from a backup directory.
* @param fs the FileSystem
* @param fs the FileSystem
* @param backupPath backup path
* @throws BackupException exception
*/
@ -449,7 +443,7 @@ public class BackupManifest {
}
this.backupImage = BackupImage.fromProto(proto);
LOG.debug("Loaded manifest instance from manifest file: "
+ BackupUtils.getPath(subFile.getPath()));
+ BackupUtils.getPath(subFile.getPath()));
return;
}
}
@ -480,10 +474,10 @@ public class BackupManifest {
byte[] data = backupImage.toProto().toByteArray();
// write the file, overwrite if already exist
Path manifestFilePath =
new Path(HBackupFileSystem.getBackupPath(backupImage.getRootDir(),
backupImage.getBackupId()), MANIFEST_FILE_NAME);
new Path(HBackupFileSystem.getBackupPath(backupImage.getRootDir(), backupImage.getBackupId()),
MANIFEST_FILE_NAME);
try (FSDataOutputStream out =
manifestFilePath.getFileSystem(conf).create(manifestFilePath, true)) {
manifestFilePath.getFileSystem(conf).create(manifestFilePath, true)) {
out.write(data);
} catch (IOException e) {
throw new BackupException(e.getMessage());
@ -531,8 +525,8 @@ public class BackupManifest {
for (BackupImage image : backupImage.getAncestors()) {
restoreImages.put(Long.valueOf(image.startTs), image);
}
return new ArrayList<>(reverse ? (restoreImages.descendingMap().values())
: (restoreImages.values()));
return new ArrayList<>(
reverse ? (restoreImages.descendingMap().values()) : (restoreImages.values()));
}
/**
@ -614,7 +608,7 @@ public class BackupManifest {
/**
* Check whether backup image set could cover a backup image or not.
* @param fullImages The backup image set
* @param image The target backup image
* @param image The target backup image
* @return true if fullImages can cover image, otherwise false
*/
public static boolean canCoverImage(ArrayList<BackupImage> fullImages, BackupImage image) {
@ -664,8 +658,8 @@ public class BackupManifest {
info.setStartTs(backupImage.getStartTs());
info.setBackupRootDir(backupImage.getRootDir());
if (backupImage.getType() == BackupType.INCREMENTAL) {
info.setHLogTargetDir(BackupUtils.getLogBackupDir(backupImage.getRootDir(),
backupImage.getBackupId()));
info.setHLogTargetDir(
BackupUtils.getLogBackupDir(backupImage.getRootDir(), backupImage.getBackupId()));
}
return info;
}

View File

@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -69,6 +68,7 @@ import org.apache.hadoop.hbase.util.Pair;
import org.apache.yetus.audience.InterfaceAudience;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos;
import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
@ -232,7 +232,7 @@ public final class BackupSystemTable implements Closeable {
long TIMEOUT = 60000;
long startTime = EnvironmentEdgeManager.currentTime();
LOG.debug("Backup table {} is not present and available, waiting for it to become so",
tableName);
tableName);
while (!admin.tableExists(tableName) || !admin.isTableAvailable(tableName)) {
try {
Thread.sleep(100);
@ -274,15 +274,17 @@ public final class BackupSystemTable implements Closeable {
Map<byte[], String> readBulkLoadedFiles(String backupId) throws IOException {
Scan scan = BackupSystemTable.createScanForBulkLoadedFiles(backupId);
try (Table table = connection.getTable(bulkLoadTableName);
ResultScanner scanner = table.getScanner(scan)) {
ResultScanner scanner = table.getScanner(scan)) {
Result res = null;
Map<byte[], String> map = new TreeMap<>(Bytes.BYTES_COMPARATOR);
while ((res = scanner.next()) != null) {
res.advance();
byte[] row = CellUtil.cloneRow(res.listCells().get(0));
for (Cell cell : res.listCells()) {
if (CellUtil.compareQualifiers(cell, BackupSystemTable.PATH_COL, 0,
BackupSystemTable.PATH_COL.length) == 0) {
if (
CellUtil.compareQualifiers(cell, BackupSystemTable.PATH_COL, 0,
BackupSystemTable.PATH_COL.length) == 0
) {
map.put(row, Bytes.toString(CellUtil.cloneValue(cell)));
}
}
@ -298,11 +300,11 @@ public final class BackupSystemTable implements Closeable {
* @return array of Map of family to List of Paths
*/
public Map<byte[], List<Path>>[] readBulkLoadedFiles(String backupId, List<TableName> sTableList)
throws IOException {
throws IOException {
Scan scan = BackupSystemTable.createScanForBulkLoadedFiles(backupId);
Map<byte[], List<Path>>[] mapForSrc = new Map[sTableList == null ? 1 : sTableList.size()];
try (Table table = connection.getTable(bulkLoadTableName);
ResultScanner scanner = table.getScanner(scan)) {
ResultScanner scanner = table.getScanner(scan)) {
Result res = null;
while ((res = scanner.next()) != null) {
res.advance();
@ -310,14 +312,20 @@ public final class BackupSystemTable implements Closeable {
byte[] fam = null;
String path = null;
for (Cell cell : res.listCells()) {
if (CellUtil.compareQualifiers(cell, BackupSystemTable.TBL_COL, 0,
BackupSystemTable.TBL_COL.length) == 0) {
if (
CellUtil.compareQualifiers(cell, BackupSystemTable.TBL_COL, 0,
BackupSystemTable.TBL_COL.length) == 0
) {
tbl = TableName.valueOf(CellUtil.cloneValue(cell));
} else if (CellUtil.compareQualifiers(cell, BackupSystemTable.FAM_COL, 0,
BackupSystemTable.FAM_COL.length) == 0) {
} else if (
CellUtil.compareQualifiers(cell, BackupSystemTable.FAM_COL, 0,
BackupSystemTable.FAM_COL.length) == 0
) {
fam = CellUtil.cloneValue(cell);
} else if (CellUtil.compareQualifiers(cell, BackupSystemTable.PATH_COL, 0,
BackupSystemTable.PATH_COL.length) == 0) {
} else if (
CellUtil.compareQualifiers(cell, BackupSystemTable.PATH_COL, 0,
BackupSystemTable.PATH_COL.length) == 0
) {
path = Bytes.toString(CellUtil.cloneValue(cell));
}
}
@ -368,7 +376,7 @@ public final class BackupSystemTable implements Closeable {
* @param finalPaths family and associated hfiles
*/
public void writePathsPostBulkLoad(TableName tabName, byte[] region,
Map<byte[], List<Path>> finalPaths) throws IOException {
Map<byte[], List<Path>> finalPaths) throws IOException {
if (LOG.isDebugEnabled()) {
LOG.debug("write bulk load descriptor to backup " + tabName + " with " + finalPaths.size()
+ " entries");
@ -388,14 +396,14 @@ public final class BackupSystemTable implements Closeable {
* @param pairs list of paths for hfiles
*/
public void writeFilesForBulkLoadPreCommit(TableName tabName, byte[] region, final byte[] family,
final List<Pair<Path, Path>> pairs) throws IOException {
final List<Pair<Path, Path>> pairs) throws IOException {
if (LOG.isDebugEnabled()) {
LOG.debug(
"write bulk load descriptor to backup " + tabName + " with " + pairs.size() + " entries");
}
try (Table table = connection.getTable(bulkLoadTableName)) {
List<Put> puts =
BackupSystemTable.createPutForPreparedBulkload(tabName, region, family, pairs);
BackupSystemTable.createPutForPreparedBulkload(tabName, region, family, pairs);
table.put(puts);
LOG.debug("written " + puts.size() + " rows for bulk load of " + tabName);
}
@ -434,7 +442,7 @@ public final class BackupSystemTable implements Closeable {
Scan scan = BackupSystemTable.createScanForOrigBulkLoadedFiles(tTable);
Map<String, Map<String, List<Pair<String, Boolean>>>> tblMap = map.get(tTable);
try (Table table = connection.getTable(bulkLoadTableName);
ResultScanner scanner = table.getScanner(scan)) {
ResultScanner scanner = table.getScanner(scan)) {
Result res = null;
while ((res = scanner.next()) != null) {
res.advance();
@ -448,14 +456,20 @@ public final class BackupSystemTable implements Closeable {
rows.add(row);
String rowStr = Bytes.toString(row);
region = BackupSystemTable.getRegionNameFromOrigBulkLoadRow(rowStr);
if (CellUtil.compareQualifiers(cell, BackupSystemTable.FAM_COL, 0,
BackupSystemTable.FAM_COL.length) == 0) {
if (
CellUtil.compareQualifiers(cell, BackupSystemTable.FAM_COL, 0,
BackupSystemTable.FAM_COL.length) == 0
) {
fam = Bytes.toString(CellUtil.cloneValue(cell));
} else if (CellUtil.compareQualifiers(cell, BackupSystemTable.PATH_COL, 0,
BackupSystemTable.PATH_COL.length) == 0) {
} else if (
CellUtil.compareQualifiers(cell, BackupSystemTable.PATH_COL, 0,
BackupSystemTable.PATH_COL.length) == 0
) {
path = Bytes.toString(CellUtil.cloneValue(cell));
} else if (CellUtil.compareQualifiers(cell, BackupSystemTable.STATE_COL, 0,
BackupSystemTable.STATE_COL.length) == 0) {
} else if (
CellUtil.compareQualifiers(cell, BackupSystemTable.STATE_COL, 0,
BackupSystemTable.STATE_COL.length) == 0
) {
byte[] state = CellUtil.cloneValue(cell);
if (Bytes.equals(BackupSystemTable.BL_PREPARE, state)) {
raw = true;
@ -489,7 +503,7 @@ public final class BackupSystemTable implements Closeable {
* @param backupId the backup Id
*/
public void writeBulkLoadedFiles(List<TableName> sTableList, Map<byte[], List<Path>>[] maps,
String backupId) throws IOException {
String backupId) throws IOException {
try (Table table = connection.getTable(bulkLoadTableName)) {
long ts = EnvironmentEdgeManager.currentTime();
int cnt = 0;
@ -566,7 +580,7 @@ public final class BackupSystemTable implements Closeable {
/**
* Write the start code (timestamp) to backup system table. If passed in null, then write 0 byte.
* @param startCode start code
* @param startCode start code
* @param backupRoot root directory path to backup
* @throws IOException exception
*/
@ -583,7 +597,7 @@ public final class BackupSystemTable implements Closeable {
/**
* Exclusive operations are: create, delete, merge
* @throws IOException if a table operation fails or an active backup exclusive operation is
* already underway
* already underway
*/
public void startBackupExclusiveOperation() throws IOException {
LOG.debug("Start new backup exclusive operation");
@ -591,11 +605,15 @@ public final class BackupSystemTable implements Closeable {
try (Table table = connection.getTable(tableName)) {
Put put = createPutForStartBackupSession();
// First try to put if row does not exist
if (!table.checkAndMutate(ACTIVE_SESSION_ROW, SESSIONS_FAMILY).qualifier(ACTIVE_SESSION_COL)
.ifNotExists().thenPut(put)) {
if (
!table.checkAndMutate(ACTIVE_SESSION_ROW, SESSIONS_FAMILY).qualifier(ACTIVE_SESSION_COL)
.ifNotExists().thenPut(put)
) {
// Row exists, try to put if value == ACTIVE_SESSION_NO
if (!table.checkAndMutate(ACTIVE_SESSION_ROW, SESSIONS_FAMILY).qualifier(ACTIVE_SESSION_COL)
.ifEquals(ACTIVE_SESSION_NO).thenPut(put)) {
if (
!table.checkAndMutate(ACTIVE_SESSION_ROW, SESSIONS_FAMILY).qualifier(ACTIVE_SESSION_COL)
.ifEquals(ACTIVE_SESSION_NO).thenPut(put)
) {
throw new ExclusiveOperationException();
}
}
@ -613,8 +631,10 @@ public final class BackupSystemTable implements Closeable {
try (Table table = connection.getTable(tableName)) {
Put put = createPutForStopBackupSession();
if (!table.checkAndMutate(ACTIVE_SESSION_ROW, SESSIONS_FAMILY).qualifier(ACTIVE_SESSION_COL)
.ifEquals(ACTIVE_SESSION_YES).thenPut(put)) {
if (
!table.checkAndMutate(ACTIVE_SESSION_ROW, SESSIONS_FAMILY).qualifier(ACTIVE_SESSION_COL)
.ifEquals(ACTIVE_SESSION_YES).thenPut(put)
) {
throw new IOException("There is no active backup exclusive operation");
}
}
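These two hunks implement the exclusive-operation lock: a single cell is flipped between "no" and "yes" with checkAndMutate, so only one client can hold the backup exclusive session at a time. A condensed sketch of the same idiom follows; the row, family and qualifier names are placeholders, not the table's real constants.
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class RowLockExample {
  private static final byte[] ROW = Bytes.toBytes("activesession"); // placeholder row key
  private static final byte[] FAM = Bytes.toBytes("session");       // placeholder family
  private static final byte[] COL = Bytes.toBytes("c");             // placeholder qualifier
  private static final byte[] YES = Bytes.toBytes("yes");
  private static final byte[] NO = Bytes.toBytes("no");

  /** Acquires the lock: succeeds only if the cell is absent or currently holds NO. */
  static void acquire(Connection conn, TableName tn) throws IOException {
    try (Table table = conn.getTable(tn)) {
      Put put = new Put(ROW).addColumn(FAM, COL, YES);
      boolean firstWriter =
        table.checkAndMutate(ROW, FAM).qualifier(COL).ifNotExists().thenPut(put);
      if (!firstWriter
        && !table.checkAndMutate(ROW, FAM).qualifier(COL).ifEquals(NO).thenPut(put)) {
        throw new IOException("another exclusive operation is already running");
      }
    }
  }

  /** Releases the lock: only succeeds while the cell holds YES. */
  static void release(Connection conn, TableName tn) throws IOException {
    try (Table table = conn.getTable(tn)) {
      Put put = new Put(ROW).addColumn(FAM, COL, NO);
      if (!table.checkAndMutate(ROW, FAM).qualifier(COL).ifEquals(YES).thenPut(put)) {
        throw new IOException("no active exclusive operation to finish");
      }
    }
  }
}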
@ -633,13 +653,13 @@ public final class BackupSystemTable implements Closeable {
* @throws IOException exception
*/
public HashMap<String, Long> readRegionServerLastLogRollResult(String backupRoot)
throws IOException {
throws IOException {
LOG.trace("read region server last roll log result to backup system table");
Scan scan = createScanForReadRegionServerLastLogRollResult(backupRoot);
try (Table table = connection.getTable(tableName);
ResultScanner scanner = table.getScanner(scan)) {
ResultScanner scanner = table.getScanner(scan)) {
Result res;
HashMap<String, Long> rsTimestampMap = new HashMap<>();
while ((res = scanner.next()) != null) {
@ -656,13 +676,13 @@ public final class BackupSystemTable implements Closeable {
/**
* Writes Region Server last roll log result (timestamp) to backup system table
* @param server Region Server name
* @param ts last log timestamp
* @param server Region Server name
* @param ts last log timestamp
* @param backupRoot root directory path to backup
* @throws IOException exception
*/
public void writeRegionServerLastLogRollResult(String server, Long ts, String backupRoot)
throws IOException {
throws IOException {
LOG.trace("write region server last roll log result to backup system table");
try (Table table = connection.getTable(tableName)) {
@ -710,7 +730,7 @@ public final class BackupSystemTable implements Closeable {
/**
* Get backup history records filtered by list of filters.
* @param n max number of records, if n == -1 , then max number is ignored
* @param n max number of records, if n == -1 , then max number is ignored
* @param filters list of filters
* @return backup records
* @throws IOException if getting the backup history fails
@ -793,7 +813,7 @@ public final class BackupSystemTable implements Closeable {
}
public Map<TableName, ArrayList<BackupInfo>> getBackupHistoryForTableSet(Set<TableName> set,
String backupRoot) throws IOException {
String backupRoot) throws IOException {
List<BackupInfo> history = getBackupHistory(backupRoot);
Map<TableName, ArrayList<BackupInfo>> tableHistoryMap = new HashMap<>();
for (Iterator<BackupInfo> iterator = history.iterator(); iterator.hasNext();) {
@ -829,7 +849,7 @@ public final class BackupSystemTable implements Closeable {
ArrayList<BackupInfo> list = new ArrayList<>();
try (Table table = connection.getTable(tableName);
ResultScanner scanner = table.getScanner(scan)) {
ResultScanner scanner = table.getScanner(scan)) {
Result res;
while ((res = scanner.next()) != null) {
res.advance();
@ -847,16 +867,16 @@ public final class BackupSystemTable implements Closeable {
* Write the current timestamps for each regionserver to backup system table after a successful
* full or incremental backup. The saved timestamp is of the last log file that was backed up
* already.
* @param tables tables
* @param tables tables
* @param newTimestamps timestamps
* @param backupRoot root directory path to backup
* @param backupRoot root directory path to backup
* @throws IOException exception
*/
public void writeRegionServerLogTimestamp(Set<TableName> tables,
Map<String, Long> newTimestamps, String backupRoot) throws IOException {
public void writeRegionServerLogTimestamp(Set<TableName> tables, Map<String, Long> newTimestamps,
String backupRoot) throws IOException {
if (LOG.isTraceEnabled()) {
LOG.trace("write RS log time stamps to backup system table for tables ["
+ StringUtils.join(tables, ",") + "]");
+ StringUtils.join(tables, ",") + "]");
}
List<Put> puts = new ArrayList<>();
for (TableName table : tables) {
@ -879,7 +899,7 @@ public final class BackupSystemTable implements Closeable {
* @throws IOException exception
*/
public Map<TableName, Map<String, Long>> readLogTimestampMap(String backupRoot)
throws IOException {
throws IOException {
if (LOG.isTraceEnabled()) {
LOG.trace("read RS log ts from backup system table for root=" + backupRoot);
}
@ -888,7 +908,7 @@ public final class BackupSystemTable implements Closeable {
Scan scan = createScanForReadLogTimestampMap(backupRoot);
try (Table table = connection.getTable(tableName);
ResultScanner scanner = table.getScanner(scan)) {
ResultScanner scanner = table.getScanner(scan)) {
Result res;
while ((res = scanner.next()) != null) {
res.advance();
@ -899,11 +919,11 @@ public final class BackupSystemTable implements Closeable {
byte[] data = CellUtil.cloneValue(cell);
if (data == null) {
throw new IOException("Data of last backup data from backup system table "
+ "is empty. Create a backup first.");
+ "is empty. Create a backup first.");
}
if (data != null && data.length > 0) {
HashMap<String, Long> lastBackup =
fromTableServerTimestampProto(BackupProtos.TableServerTimestamp.parseFrom(data));
fromTableServerTimestampProto(BackupProtos.TableServerTimestamp.parseFrom(data));
tableTimestampMap.put(tn, lastBackup);
}
}
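The map assembled above is keyed by table, then by "host:port", with the last backed-up WAL timestamp as the value. A small sketch of how such a nested map could be consumed; oldestTimestamp is an invented helper, not a method of this class.
import java.util.Map;
import org.apache.hadoop.hbase.TableName;

public class TimestampMapExample {
  /** Walks the table -> (server -> timestamp) map returned by readLogTimestampMap. */
  static long oldestTimestamp(Map<TableName, Map<String, Long>> tableTimestampMap) {
    long oldest = Long.MAX_VALUE;
    for (Map.Entry<TableName, Map<String, Long>> byTable : tableTimestampMap.entrySet()) {
      for (Map.Entry<String, Long> byServer : byTable.getValue().entrySet()) {
        oldest = Math.min(oldest, byServer.getValue()); // "host:port" -> last backed-up WAL ts
      }
    }
    return oldest;
  }
}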
@ -912,11 +932,11 @@ public final class BackupSystemTable implements Closeable {
}
private BackupProtos.TableServerTimestamp toTableServerTimestampProto(TableName table,
Map<String, Long> map) {
Map<String, Long> map) {
BackupProtos.TableServerTimestamp.Builder tstBuilder =
BackupProtos.TableServerTimestamp.newBuilder();
BackupProtos.TableServerTimestamp.newBuilder();
tstBuilder
.setTableName(org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toProtoTableName(table));
.setTableName(org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toProtoTableName(table));
for (Entry<String, Long> entry : map.entrySet()) {
BackupProtos.ServerTimestamp.Builder builder = BackupProtos.ServerTimestamp.newBuilder();
@ -939,7 +959,7 @@ public final class BackupSystemTable implements Closeable {
List<BackupProtos.ServerTimestamp> list = proto.getServerTimestampList();
for (BackupProtos.ServerTimestamp st : list) {
ServerName sn =
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toServerName(st.getServerName());
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toServerName(st.getServerName());
map.put(sn.getHostname() + ":" + sn.getPort(), st.getTimestamp());
}
return map;
@ -973,12 +993,12 @@ public final class BackupSystemTable implements Closeable {
/**
* Add tables to global incremental backup set
* @param tables set of tables
* @param tables set of tables
* @param backupRoot root directory path to backup
* @throws IOException exception
*/
public void addIncrementalBackupTableSet(Set<TableName> tables, String backupRoot)
throws IOException {
throws IOException {
if (LOG.isTraceEnabled()) {
LOG.trace("Add incremental backup table set to backup system table. ROOT=" + backupRoot
+ " tables [" + StringUtils.join(tables, " ") + "]");
@ -1019,7 +1039,7 @@ public final class BackupSystemTable implements Closeable {
Scan scan = createScanForBackupHistory();
scan.setCaching(1);
try (Table table = connection.getTable(tableName);
ResultScanner scanner = table.getScanner(scan)) {
ResultScanner scanner = table.getScanner(scan)) {
if (scanner.next() != null) {
result = true;
}
@ -1073,13 +1093,13 @@ public final class BackupSystemTable implements Closeable {
res.advance();
String[] tables = cellValueToBackupSet(res.current());
return Arrays.asList(tables).stream().map(item -> TableName.valueOf(item))
.collect(Collectors.toList());
.collect(Collectors.toList());
}
}
/**
* Add backup set (list of tables)
* @param name set name
* @param name set name
* @param newTables list of tables, comma-separated
* @throws IOException if a table operation fails
*/
@ -1105,7 +1125,7 @@ public final class BackupSystemTable implements Closeable {
/**
* Remove tables from backup set (list of tables)
* @param name set name
* @param name set name
* @param toRemove list of tables
* @throws IOException if a table operation or deleting the backup set fails
*/
@ -1132,7 +1152,7 @@ public final class BackupSystemTable implements Closeable {
table.put(put);
} else if (disjoint.length == tables.length) {
LOG.warn("Backup set '" + name + "' does not contain tables ["
+ StringUtils.join(toRemove, " ") + "]");
+ StringUtils.join(toRemove, " ") + "]");
} else { // disjoint.length == 0 and tables.length >0
// Delete backup set
LOG.info("Backup set '" + name + "' is empty. Deleting.");
@ -1176,7 +1196,7 @@ public final class BackupSystemTable implements Closeable {
TableDescriptorBuilder builder = TableDescriptorBuilder.newBuilder(getTableName(conf));
ColumnFamilyDescriptorBuilder colBuilder =
ColumnFamilyDescriptorBuilder.newBuilder(SESSIONS_FAMILY);
ColumnFamilyDescriptorBuilder.newBuilder(SESSIONS_FAMILY);
colBuilder.setMaxVersions(1);
Configuration config = HBaseConfiguration.create();
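This hunk and the bulk-load variant below build the backup system tables with the 2.x descriptor builders, keeping a single version and setting a configurable TTL on the family. A stand-alone sketch of that builder pattern; the table name, family and TTL below are placeholders.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class DescriptorExample {
  /** Builds a one-family descriptor with a single retained version and a TTL, as above. */
  static TableDescriptor buildDescriptor(TableName tn, byte[] family, int ttlSeconds) {
    ColumnFamilyDescriptorBuilder colBuilder = ColumnFamilyDescriptorBuilder.newBuilder(family);
    colBuilder.setMaxVersions(1);
    colBuilder.setTimeToLive(ttlSeconds);
    return TableDescriptorBuilder.newBuilder(tn).setColumnFamily(colBuilder.build()).build();
  }

  public static void main(String[] args) {
    TableDescriptor td =
      buildDescriptor(TableName.valueOf("backup_example"), Bytes.toBytes("session"), 3 * 24 * 3600);
    System.out.println(td);
  }
}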
@ -1213,10 +1233,10 @@ public final class BackupSystemTable implements Closeable {
*/
public static TableDescriptor getSystemTableForBulkLoadedDataDescriptor(Configuration conf) {
TableDescriptorBuilder builder =
TableDescriptorBuilder.newBuilder(getTableNameForBulkLoadedData(conf));
TableDescriptorBuilder.newBuilder(getTableNameForBulkLoadedData(conf));
ColumnFamilyDescriptorBuilder colBuilder =
ColumnFamilyDescriptorBuilder.newBuilder(SESSIONS_FAMILY);
ColumnFamilyDescriptorBuilder.newBuilder(SESSIONS_FAMILY);
colBuilder.setMaxVersions(1);
Configuration config = HBaseConfiguration.create();
int ttl = config.getInt(BackupRestoreConstants.BACKUP_SYSTEM_TTL_KEY,
@ -1375,11 +1395,11 @@ public final class BackupSystemTable implements Closeable {
/**
* Creates Put to write RS last roll log timestamp map
* @param table table
* @param smap map, containing RS:ts
* @param smap map, containing RS:ts
* @return put operation
*/
private Put createPutForWriteRegionServerLogTimestamp(TableName table, byte[] smap,
String backupRoot) {
String backupRoot) {
Put put = new Put(rowkey(TABLE_RS_LOG_MAP_PREFIX, backupRoot, NULL, table.getNameAsString()));
put.addColumn(BackupSystemTable.META_FAMILY, Bytes.toBytes("log-roll-map"), smap);
return put;
@ -1414,12 +1434,12 @@ public final class BackupSystemTable implements Closeable {
/**
* Creates Put to store RS last log result
* @param server server name
* @param server server name
* @param timestamp log roll result (timestamp)
* @return put operation
*/
private Put createPutForRegionServerLastLogRollResult(String server, Long timestamp,
String backupRoot) {
String backupRoot) {
Put put = new Put(rowkey(RS_LOG_TS_PREFIX, backupRoot, NULL, server));
put.addColumn(BackupSystemTable.META_FAMILY, Bytes.toBytes("rs-log-ts"),
Bytes.toBytes(timestamp));
@ -1458,7 +1478,7 @@ public final class BackupSystemTable implements Closeable {
* Creates Put's for bulk load resulting from running LoadIncrementalHFiles
*/
static List<Put> createPutForCommittedBulkload(TableName table, byte[] region,
Map<byte[], List<Path>> finalPaths) {
Map<byte[], List<Path>> finalPaths) {
List<Put> puts = new ArrayList<>();
for (Map.Entry<byte[], List<Path>> entry : finalPaths.entrySet()) {
for (Path path : entry.getValue()) {
@ -1472,8 +1492,8 @@ public final class BackupSystemTable implements Closeable {
put.addColumn(BackupSystemTable.META_FAMILY, PATH_COL, Bytes.toBytes(file));
put.addColumn(BackupSystemTable.META_FAMILY, STATE_COL, BL_COMMIT);
puts.add(put);
LOG.debug(
"writing done bulk path " + file + " for " + table + " " + Bytes.toString(region));
LOG
.debug("writing done bulk path " + file + " for " + table + " " + Bytes.toString(region));
}
}
return puts;
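Each committed bulk-load file is recorded as one Put carrying the source table, family, file path and state as separate columns under a composite row key. A simplified sketch with placeholder qualifiers and an illustrative key layout (the real class uses its own prefix and delimiter constants):
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BulkloadPutExample {
  // Placeholder family/qualifiers mirroring META_FAMILY, TBL_COL, FAM_COL, PATH_COL, STATE_COL.
  private static final byte[] META = Bytes.toBytes("meta");
  private static final byte[] TBL = Bytes.toBytes("tbl");
  private static final byte[] FAM = Bytes.toBytes("fam");
  private static final byte[] PATH = Bytes.toBytes("path");
  private static final byte[] STATE = Bytes.toBytes("state");

  /** One Put per bulk-loaded file: composite row key, one column per attribute. */
  static Put describeBulkLoadedFile(TableName table, byte[] region, byte[] family, String file,
    byte[] state) {
    byte[] row = Bytes.toBytes("bulk\u0000" + table.getNameAsString() + "\u0000"
      + Bytes.toString(region) + "\u0000" + file);   // illustrative key layout only
    Put put = new Put(row);
    put.addColumn(META, TBL, table.getName());
    put.addColumn(META, FAM, family);
    put.addColumn(META, PATH, Bytes.toBytes(file));
    put.addColumn(META, STATE, state);
    return put;
  }
}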
@ -1538,7 +1558,7 @@ public final class BackupSystemTable implements Closeable {
* Creates Put's for bulk load resulting from running LoadIncrementalHFiles
*/
static List<Put> createPutForPreparedBulkload(TableName table, byte[] region, final byte[] family,
final List<Pair<Path, Path>> pairs) {
final List<Pair<Path, Path>> pairs) {
List<Put> puts = new ArrayList<>(pairs.size());
for (Pair<Path, Path> pair : pairs) {
Path path = pair.getSecond();
@ -1740,8 +1760,8 @@ public final class BackupSystemTable implements Closeable {
*/
static Scan createScanForBulkLoadedFiles(String backupId) {
Scan scan = new Scan();
byte[] startRow = backupId == null ? BULK_LOAD_PREFIX_BYTES
: rowkey(BULK_LOAD_PREFIX, backupId + BLK_LD_DELIM);
byte[] startRow =
backupId == null ? BULK_LOAD_PREFIX_BYTES : rowkey(BULK_LOAD_PREFIX, backupId + BLK_LD_DELIM);
byte[] stopRow = Arrays.copyOf(startRow, startRow.length);
stopRow[stopRow.length - 1] = (byte) (stopRow[stopRow.length - 1] + 1);
scan.withStartRow(startRow);
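The scan built here covers exactly the rows sharing the backupId prefix by copying the start row and incrementing its last byte to form the stop row. A stand-alone sketch of that prefix-scan trick; it assumes the prefix does not end in 0xFF, which holds for the ASCII prefixes used in this class. Scan#setRowPrefixFilter gives roughly the same behaviour out of the box.
import java.util.Arrays;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixScanExample {
  /** Scans every row starting with the prefix by stopping at prefix-with-last-byte-plus-one. */
  static Scan scanForPrefix(String prefix) {
    byte[] startRow = Bytes.toBytes(prefix);
    byte[] stopRow = Arrays.copyOf(startRow, startRow.length);
    stopRow[stopRow.length - 1] = (byte) (stopRow[stopRow.length - 1] + 1);
    return new Scan().withStartRow(startRow).withStopRow(stopRow);
  }
}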
@ -1752,7 +1772,7 @@ public final class BackupSystemTable implements Closeable {
}
static Put createPutForBulkLoadedFile(TableName tn, byte[] fam, String p, String backupId,
long ts, int idx) {
long ts, int idx) {
Put put = new Put(rowkey(BULK_LOAD_PREFIX, backupId + BLK_LD_DELIM + ts + BLK_LD_DELIM + idx));
put.addColumn(BackupSystemTable.META_FAMILY, TBL_COL, tn.getName());
put.addColumn(BackupSystemTable.META_FAMILY, FAM_COL, fam);
@ -1798,7 +1818,7 @@ public final class BackupSystemTable implements Closeable {
/**
* Creates Put operation to update backup set content
* @param name backup set's name
* @param name backup set's name
* @param tables list of tables
* @return put operation
*/


@ -1,5 +1,4 @@
/**
*
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
@ -19,7 +18,6 @@
package org.apache.hadoop.hbase.backup.impl;
import java.io.IOException;
import org.apache.yetus.audience.InterfaceAudience;
@InterfaceAudience.Private
