Found an issue in the 532 commit. The MemcacheScanner had a flipped isWildcardScanner
test; it returned wrong columns because it was using okCols rather than the literal
columns passed in. Fixed here.
git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/trunk@648912 13f79535-47bb-0310-9956-ffa450edef68
-Removed HScannerInterface and HInternalScannerInterface
-Created new interfaces Scanner for clients and InternalScanner for internal consumers
-Internal and client scanners no longer share common interface
-Client scanner's next() method and its iterables now deal in RowResults (see the sketch after this list)
-Updated tests and internal consumers to use Scanner in place of HScannerInterface
-HTable's obtainScanner(*) methods are now renamed getScanner(*)
-Tests get a ScannerIncommon to wrap Scanners as InternalScanners where needed
-Fixed a bug in HMaster that was eating TableExistsExceptions (unrelated)
-Updated TableInputFormat to provide RowResults instead of MapWritables
-Updated TableOutputFormat to take BatchUpdates instead of MapWritables
-Updated TableMap, TableReduce, and friends to correctly hook up to new input/output formats
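A minimal sketch of how the renamed client API reads after this change, assuming this point in trunk still uses Text row/column keys and the package layout these classes ended up with; the exact signatures are illustrative, not copied from the commit:

    import java.util.Map;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Scanner;
    import org.apache.hadoop.hbase.io.Cell;
    import org.apache.hadoop.hbase.io.RowResult;
    import org.apache.hadoop.io.Text;

    public class ScanSketch {
      public static void main(String[] args) throws Exception {
        HTable table = new HTable(new HBaseConfiguration(), new Text("mytable"));
        // getScanner(*) replaces the old obtainScanner(*) methods.
        Scanner scanner = table.getScanner(new Text[] { new Text("info:") },
            new Text(""));
        try {
          // The client Scanner is iterable, and next() hands back RowResults.
          for (RowResult row : scanner) {
            for (Map.Entry<Text, Cell> e : row.entrySet()) {
              System.out.println(row.getRow() + " " + e.getKey());
            }
          }
        } finally {
          scanner.close();
        }
      }
    }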
HBASE-567 Reused BatchUpdate instances accumulate BatchOperations
- Fix to BatchUpdate that allows correct reuse of BatchUpdate instances (readFields didn't clear the BatchOperation map; see the sketch below)
- Update TestSerialization to prove above is fixed
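An illustrative sketch of the fix, not the literal BatchUpdate patch: a reused Writable must clear any state left over from a previous deserialization before repopulating it, otherwise operations accumulate across calls. The class and field names here are placeholders.

    import java.io.DataInput;
    import java.io.IOException;
    import java.util.ArrayList;

    import org.apache.hadoop.io.Text;

    class ReusableUpdate {
      private final Text row = new Text();
      private final ArrayList<Text> operations = new ArrayList<Text>();

      public void readFields(DataInput in) throws IOException {
        row.readFields(in);
        operations.clear();              // the missing step in the original bug
        int n = in.readInt();
        for (int i = 0; i < n; i++) {
          Text op = new Text();
          op.readFields(in);
          operations.add(op);
        }
      }
    }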
git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/trunk@646104 13f79535-47bb-0310-9956-ffa450edef68
-HConnectionManager#locateRegionInMeta no longer throws IllegalStateException; instead it throws the new RegionOfflineException
-Removed duplicated code for catching exceptions for retries
git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/trunk@644854 13f79535-47bb-0310-9956-ffa450edef68
-Updated HRegionInterface, HRegionServer, HRegion, and HStore so their getRow methods return RowResults (see the sketch after this list)
-Updated HTable to expect RowResult objects
-Updated ThriftServer to expect RowResults
-Cleaned up HConnectionManager's interaction with region servers
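A small sketch of what the RowResult return means for a caller, under the same assumptions (Text keys, RowResult exposing a map of column to Cell) as the scanner sketch above:

    // Given an open HTable named table (imports as in the scanner sketch):
    RowResult row = table.getRow(new Text("row1"));
    if (row != null) {
      for (Map.Entry<Text, Cell> e : row.entrySet()) {
        System.out.println(e.getKey() + " = " + new String(e.getValue().getValue()));
      }
    }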
git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/trunk@644828 13f79535-47bb-0310-9956-ffa450edef68
Removes startUpdate calls from all but a few places. TestBatchUpdate and TestMultipleUpdates both stay the same, but TestMultipleUpdates will be removed when startUpdate is. Parts of TestBatchUpdate will also be whacked when we remove the deprecated methods. HTable still has its startUpdate methods.
Changed the Incommon interface to remove the startUpdate, put, delete, and commit methods, and added a new commit(BatchUpdate) method.
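A hedged before/after sketch of what this means for callers; the Text key type and exact method shapes are assumptions for this point in trunk:

    // Old, deprecated row-lock style:
    //   long lockid = table.startUpdate(new Text("row1"));
    //   table.put(lockid, new Text("info:name"), "value".getBytes());
    //   table.commit(lockid);

    // New style: gather edits into a BatchUpdate and commit it in one call.
    BatchUpdate update = new BatchUpdate(new Text("row1"));
    update.put(new Text("info:name"), "value".getBytes());
    update.delete(new Text("info:old"));
    table.commit(update);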
git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/trunk@644811 13f79535-47bb-0310-9956-ffa450edef68
We now show hadoop and hbase versions in master. Also cleaned up how
the jsps are generated, moving generation into the main build.xml as a
pre-compile step.
D src/java/org/apache/hadoop/hbase/generated
Removed this directory. No longer check in jsps. Generate them every time.
M src/webapps/regionserver/regionserver.jsp
Use the hbase VersionInfo so we get hbase version, not hadoop's.
M src/webapps/master/master.jsp
Output hadoop and hbase versions (see the sketch after this file list).
D build-webapps.xml
Removed. Integrated into the main build.xml. The reason we kept this
distinct -- CLASSPATH issues when compiling in the hadoop context --
no longer prevails.
M build.xml
(jspc): Added target and made it a prereq of compile.
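For reference, the distinction the jsp edits above turn on, assuming the hbase VersionInfo mirrors hadoop's static accessors:

    // Two different classes; only the first reports the hbase build.
    String hbaseVersion  = org.apache.hadoop.hbase.util.VersionInfo.getVersion();
    String hadoopVersion = org.apache.hadoop.util.VersionInfo.getVersion();
    System.out.println("HBase " + hbaseVersion + ", Hadoop " + hadoopVersion);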
git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/trunk@644034 13f79535-47bb-0310-9956-ffa450edef68
HMerge, HRegionServer
- changes that reflect changes to HRegion, CompactSplitThread and Flusher methods
ServerManager
- Return zero length array to region server if it is exiting or quiesced and Master is not yet ready to shut down.
QueueEntry
- removed. no longer used.
CompactSplitThread
- make compactionQueue a queue of HRegion.
- Add Set<HRegion> so we can quickly determine if a region is in the queue. BlockingQueue.contains() does a linear scan of the queue.
- Add a lock and interruptPolitely methods so that compactions/splits in progress are not interrupted.
- Don't add a region to the queue if it is already present (see the sketch below).
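A generic sketch of the queue-plus-set bookkeeping described above; field and method names are illustrative, not the actual CompactSplitThread members:

    private final BlockingQueue<HRegion> compactionQueue =
        new LinkedBlockingQueue<HRegion>();
    // Mirror of queue membership: Set.contains()/add() are O(1), whereas
    // BlockingQueue.contains() does a linear scan.
    private final Set<HRegion> regionsInQueue = new HashSet<HRegion>();

    synchronized void requestCompaction(HRegion r) {
      if (regionsInQueue.add(r)) {     // false if the region is already queued
        compactionQueue.add(r);
      }
    }

    HRegion takeNext() throws InterruptedException {
      HRegion r = compactionQueue.take();    // blocks until work is available
      synchronized (this) {
        regionsInQueue.remove(r);
      }
      return r;
    }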
Flusher
- change queue from DelayQueue to BlockingQueue, with HRegion entries instead of QueueEntry.
- Add Set<HRegion> to quickly determine if a region is already in the queue to avoid linear scan of BlockingQueue.contains().
- Only put regions in the queue for optional cache flush if the last time they were flushed is older than now - optionalFlushInterval.
- Only add a region to the queue if it is not already present (see the sketch below).
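And the matching sketch of the optional-flush test described above; getLastFlushTime and the field names are assumptions, not the actual Flusher code:

    long now = System.currentTimeMillis();
    // Queue an optional flush only if the region hasn't been flushed within the
    // last optionalFlushInterval ms and isn't already waiting in the queue.
    if (region.getLastFlushTime() < now - optionalFlushInterval
        && regionsInQueue.add(region)) {
      flushQueue.add(region);
    }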
HRegion
- don't request a cache flush if one has already been requested.
- Add setLastFlushTime so flusher can set it once it has queued an optional flush.
- Replace largestHStore with getLargestHStoreSize: returns long instead of HStoreSize object.
- Add midKey as parameter to splitRegion.
- Reorder start of splitRegion so it doesn't do any work before validating parameters.
- Remove needsSplit and compactIfNeeded - no longer needed.
- compactStores now returns midKey if split is needed.
- snapshotMemcaches now sets flushRequested to false and sets lastFlushTime to now.
- update does not request a cache flush if one has already been requested.
- Override equals and hashCode so HRegions can be stored in a HashSet (see the sketch below).
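A sketch of the equals/hashCode override implied above, keyed on the region name (an assumption about what the real override uses), so HRegions behave correctly as HashSet members:

    @Override
    public boolean equals(Object o) {
      if (this == o) {
        return true;
      }
      if (!(o instanceof HRegion)) {
        return false;
      }
      return getRegionName().equals(((HRegion) o).getRegionName());
    }

    @Override
    public int hashCode() {
      return getRegionName().hashCode();
    }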
HStore
- loadHStoreFiles now computes max sequence id and the initial size of the store.
- Add getter for family.
- internalCacheFlush updates store size, and logs both size of cache flush and resulting map file size (with debug logging enabled).
- Remove needsCompaction and hasReferences - no longer needed.
- compact() returns midKey if store needs to be split.
- compact() does all checking before actually starting a compaction.
- If store size is greater than desiredMaxFileSize, compact returns the midKey for the store regardless of whether a compaction was actually done.
- Added more synchronization in completeCompaction while iterating over storeFiles.
- completeCompaction computes new store size.
- New method checkSplit replaces the size method. Returns the midKey if the store needs to be split and can be split (see the sketch below).
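Roughly, the contract these HStore changes add up to (simplified; the Text key type and call shapes are assumptions):

    // compact() hands back the store's midKey when the store has grown past
    // desiredMaxFileSize (whether or not a compaction actually ran); the caller
    // then drives the split with that midKey instead of re-measuring the store.
    Text midKey = store.compact();
    if (midKey != null) {
      region.splitRegion(midKey);   // midKey is now an explicit splitRegion parameter
    }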
HStoreSize
- removed. No longer needed.
HBaseTestCase
- only set fs if it has not already been set by a subclass.
TestTableIndex, TestTableMapReduce
- call FSUtil.deleteFully to clean up cruft left in the local fs by MapReduce
git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/trunk@643761 13f79535-47bb-0310-9956-ffa450edef68
M branches/0.1/src/java/org/apache/hadoop/hbase/util/MetaUtils.java
M trunk/src/java/org/apache/hadoop/hbase/util/MetaUtils.java
(changeOnlineStatus): Added.
git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/trunk@643486 13f79535-47bb-0310-9956-ffa450edef68
server reports that it is processing the open request
This is the patch reviewed with Jim, but with the number of edits between
reports made into a configurable.
Have the HRegionServer pass a Progressable implementation down into
Region and then down into Store, where edits are replayed. Call progress
after every couple of thousand edits.
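A sketch of the plumbing this describes; the helper names are hypothetical, but Progressable is hadoop's org.apache.hadoop.util.Progressable with a single progress() callback:

    // Region server side: build a reporter that queues MSG_REPORT_PROCESS_OPEN
    // for the master each time progress() is invoked.
    Progressable reporter = new Progressable() {
      public void progress() {
        reportProcessOpen(regionInfo);   // hypothetical helper that enqueues the msg
      }
    };

    // Store side, while replaying reconstruction-log edits: tick the reporter
    // every few thousand edits (the interval is the new configurable).
    int editsSinceLastReport = 0;
    while (logReader.next(key, value)) {     // hypothetical replay loop
      applyEdit(key, value);                 // hypothetical helper
      if (++editsSinceLastReport > reportInterval) {
        reporter.progress();
        editsSinceLastReport = 0;
      }
    }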
M src/java/org/apache/hadoop/hbase/HStore.java
Take a Progressable in the constructor. Call it when applying edits.
M src/java/org/apache/hadoop/hbase/HMaster.java
Update comment around MSG_REPORT_PROCESS_OPEN so it's expected
that we can get more than one of these messages during a region open.
M src/java/org/apache/hadoop/hbase/HRegion.java
New constructor that takes a Progressable. Pass it to Stores on construction.
M src/java/org/apache/hadoop/hbase/HRegionServer.java
On open of a region, pass in a Progressable that adds a
MSG_REPORT_PROCESS_OPEN every time it's called.
git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/trunk@643223 13f79535-47bb-0310-9956-ffa450edef68
on each iteration, edits are aggregated up into the millions
M src/java/org/apache/hadoop/hbase/HLog.java
(splitLog): If an exception occurs while processing a split, catch it.
In the finally block, close and delete the split. Don't retry.
While in some circumstances we might recover, it's also
likely that we'd just get the same exception again. If so, and there are
multiple files, we'll just accumulate edits until the
kingdom comes.
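The shape of the change, as a sketch with assumed helper names rather than the literal patch:

    for (Path logfile : logfiles) {
      SequenceFile.Reader in = null;
      try {
        in = new SequenceFile.Reader(fs, logfile, conf);
        copyEditsToRegionLogs(in);      // hypothetical helper doing the actual split
      } catch (IOException e) {
        // Catch rather than propagate: a retry would most likely fail the same way.
        LOG.warn("Exception processing " + logfile + "; skipping it", e);
      } finally {
        if (in != null) {
          in.close();
        }
        fs.delete(logfile);             // never re-split this file on a later pass
      }
    }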
git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/trunk@643142 13f79535-47bb-0310-9956-ffa450edef68
M src/java/org/apache/hadoop/hbase/HStore.java
(Constructor): If an exception comes out of the reconstructionLog method, log it and
keep going. The presumption is that it's the result of the lack of HADOOP-1700
(see the sketch below).
(reconstructionLog): Check for empty log file.
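A sketch of the constructor behavior described above (names and signatures are assumptions):

    try {
      doReconstructionLog(reconstructionLog, maxSeqId);   // hypothetical signature
    } catch (IOException e) {
      // Log and keep going: the store should still open even if replay fails.
      LOG.warn("Exception replaying reconstruction log " + reconstructionLog
          + "; continuing without it", e);
    }

    // And inside the replay method, bail out early on a missing log
    // (plus the new check that the file is non-empty):
    if (reconstructionLog == null || !fs.exists(reconstructionLog)) {
      return;   // nothing to replay
    }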
git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/trunk@643110 13f79535-47bb-0310-9956-ffa450edef68
Don't log to stdout the fact that we're in safe mode... it
only does it once, so the user thinks we've actually exited
safe mode when we could still be stuck waiting on it. Force the
user to look in the logs.
git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/trunk@640633 13f79535-47bb-0310-9956-ffa450edef68
-Changed MiniHBaseCluster to not start up a MiniDFS
-Changed HBaseClusterTestCase to do the work of starting up a MiniDFS.
-Added pre- and post-setup methods to HBaseClusterTestCase so you can control what happens before the MiniHBaseCluster is booted up (see the sketch after this list)
-Converted AbstractMergeTestCase to be a HBaseClusterTestCase
-Converted any test that used a raw MiniDFS or MiniHBaseCluster to use HBaseClusterTestCase instead
-Split TestTimestamp into two tests - one for clientside (now in o.a.h.h.client) and one for serverside (o.a.h.h.regionserver)
-Merged in Stack's changes to make bin/hbase have hadoop jars first on the classpath
-Updated PerformanceEvaluation (in --miniCluster mode) to start up a DFS first
-Fixed a bug in BaseScanner that would have allowed NPEs to be generated
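A sketch of how a test might use the new hooks; the method names here are hypothetical placeholders for whatever HBaseClusterTestCase actually calls them:

    public class TestMyFeature extends HBaseClusterTestCase {
      @Override
      protected void preHBaseClusterSetup() throws Exception {   // hypothetical name
        // Runs after the MiniDFS is up but before the MiniHBaseCluster boots,
        // e.g. to seed fixture files into the fresh filesystem.
      }

      @Override
      protected void postHBaseClusterSetup() throws Exception {  // hypothetical name
        // Runs once the cluster is available, e.g. to create the tables
        // the test methods expect.
      }
    }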
git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/trunk@640526 13f79535-47bb-0310-9956-ffa450edef68