- Moves o.a.h.h.{mapred, mapreduce} out to a new hbase-mapreduce module, which depends
on hbase-server because of classes like *Snapshot{Input,Output}Format.java, WALs, replication, etc.
- hbase-backup depends on it for WALPlayer and MR job stuff
- A bunch of tools needed to be pulled into hbase-mapreduce because of their dependencies on MR.
These are: CompactionTool, LoadTestTool, PerformanceEvaluation, ExportSnapshot.
This is a better place for them than hbase-server, though the ideal home would be a separate hbase-tools module.
- There were some tests in hbase-server which were digging into these tools for static util functions or
confs. Moved these to a better, easily shared place; e.g. security-related helpers went to HBaseKerberosUtils.
- Note that hbase-mapreduce has secondPartExecution tests. On my machine they took about 20 min, so maybe
more on Apache Jenkins. That's basically an equal reduction in the runtime of the hbase-server tests, which is a
big win!
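As a hedged illustration, a typical downstream MapReduce job now pulls these classes from the hbase-mapreduce artifact instead of hbase-server; the table name and mapper below are illustrative only:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class ExampleRowCounter {
  // Emits one (rowkey, 1) pair per scanned row; purely illustrative.
  static class RowCountMapper extends TableMapper<Text, IntWritable> {
    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context context)
        throws IOException, InterruptedException {
      context.write(new Text(key.get()), new IntWritable(1));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "example-row-counter");
    job.setJarByClass(ExampleRowCounter.class);
    // Wires the table input format and the mapper together; TableMapReduceUtil now lives in hbase-mapreduce.
    TableMapReduceUtil.initTableMapperJob("example_table", new Scan(),
        RowCountMapper.class, Text.class, IntWritable.class, job);
    job.setNumReduceTasks(0);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}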
Change-Id: Ieeb7235014717ca83ee5cb13b2a27fddfa6838e8
Contains the following commits:
HBASE-17748 Include HBase snapshots in space quotas
Introduces a new Chore in the Master which computes the size
of the snapshots in a cluster. Each snapshot's size is counted
against the HDFS usage of the table it was created from.
Includes some test stabilization, trying to make the tests more
deterministic by waiting for reported values to stabilize, since we
know those values are mutable. This should help avoid problems
where size reports are delayed and we would otherwise see an incomplete value.
HBASE-17752 Shell command to list snapshot sizes WRT quotas
HBASE-17840 Update hbase book to space quotas on snapshots
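For context, a hedged sketch of the client-side space quota API this snapshot accounting feeds into; the table name and limit are illustrative:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.SpaceViolationPolicy;

public class SetTableSpaceQuota {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      // Cap the table at 10 GB; with HBASE-17748 the sizes of its snapshots
      // count against this limit as well.
      admin.setQuota(QuotaSettingsFactory.limitTableSpace(
          TableName.valueOf("example_table"), 10L * 1024 * 1024 * 1024,
          SpaceViolationPolicy.NO_INSERTS));
    }
  }
}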
Selectively add a dependency on the hbase-thirdparty jars.
Update to READMEs on how protobuf is done (and update to refguide).
Removed all checked in generated protobuf files. They are generated
on the fly now as part of mainline build.
Update asciidoctor-maven-plugin to 1.5.5 and asciidoctorj-pdf to 1.5.0-alpha.15
asciidoctor's pdfmark generation is turned off
Modify title-logo tag to title-logo-image
It should be the normal case that HBase automatically deletes
quotas for deleted tables. Switch the Observer to be on by
default and add an option to instead prevent it from being added.
When a table or namespace is deleted, it would be nice to automatically
delete the quota on said table/NS. It's possible that not all people
would want this functionality so we can leave it up to the user to
configure this Observer.
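A hedged hbase-site.xml sketch of opting out; the property name here is assumed from this change and should be verified against the shipped defaults:

<property>
  <name>hbase.quota.remove.on.table.delete</name>
  <!-- Assumed key/default: true means quotas are dropped when the table or
       namespace is deleted. Set to false to keep quotas around after a delete. -->
  <value>false</value>
</property>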
Reason for refactor:
In cases where one might need multiple observers, say region, master, and regionserver, the fact that only one class can be extended gives rise to the following pattern:
public class BaseMasterAndRegionObserver
extends BaseRegionObserver
implements MasterObserver
class AccessController
extends BaseMasterAndRegionObserver
implements RegionServerObserver
where BaseMasterAndRegionObserver is a full copy of BaseMasterObserver.
There is a simpler case, too, where the current design fails:
say only one observer is needed by the coprocessor, but the design doesn't permit extending even that single observer (see RSGroupAdminEndpoint). That leads to copying the full Base...Observer class into the coprocessor class, yielding thousands of lines of code and an ugly mix of 5 meaningful functions among 100 useless ones.
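A hedged sketch of what the refactor enables (assuming all Observer methods now carry default no-op bodies, as described above); the class name is illustrative:

import org.apache.hadoop.hbase.coprocessor.MasterObserver;
import org.apache.hadoop.hbase.coprocessor.RegionServerObserver;

// Implement exactly the Observer interfaces the coprocessor needs and override
// only the hooks it cares about; everything else keeps its default no-op body,
// so no Base*Observer copies are required.
public class ExampleMultiObserver implements MasterObserver, RegionServerObserver {
}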
Javadoc changes:
- Adds class comments on 'default' methods and expectations.
- Adds an explanation of exception handling in the Observers' class comment. Removes redundant @throws before each function.
- Improves javadocs for a bunch of functions.
- Deletes empty @param tags in a bunch of places.
Change-Id: I265738d47e8554e7b4678e88bb916a0cc7d00ab3
An edit that removes warnings that standalone mode is not 'production ready'
and that the local filesystem loses data (it doesn't anymore). Adds a
section on how to run standalone over HDFS.
Which includes
HBASE-16742 Add chapter for devs on how we do protobufs going forward
HBASE-16741 Amend the generate protobufs out-of-band build step
to include shade, pulling in protobuf source and a hook for patching protobuf
Removed ByteStringer from hbase-protocol-shaded. Use the protobuf-3.1.0
trick directly instead. Makes stuff cleaner. All under 'shaded' dir is
now generated.
HBASE-16567 Upgrade to protobuf-3.1.x
Regenerate all protos in this module with protoc3.
Redo ByteStringer to use new pb3.1.0 unsafebytesutil
instead of HBaseZeroCopyByteString
HBASE-16264 Figure how to deal with endpoints and shaded pb
Shade our protobufs. Do it in a manner that lets us keep references to
com.google.protobuf in our API (and in REST). The c.g.p in the API is for Coprocessor Endpoints (CPEPs).
This patch is Tactic #4 from the Shading Doc attached to the referenced issue.
Figuring out an approach took a while because we have Coprocessor Endpoints
mixed in with the core of HBase that are tough to untangle (FIX).
Tactic #4 (the fourth attempt at addressing this issue) is COPY all but
the CPEP .proto files currently in hbase-protocol to a new module named
hbase-protocol-shaded. Generate .protos again in the new location and
then relocate/shade the generated files. Let CPEPs keep on with the
old references at com.google.protobuf.* and
org.apache.hadoop.hbase.protobuf.* but change the hbase core so all
instead refer to the relocated files in their new location at
org.apache.hadoop.hbase.shaded.com.google.protobuf.*.
Let the new module also shade protobufs themselves and change hbase
core to pick up this shaded protobuf rather than directly reference
com.google.protobuf.
This approach allows us to explicitly refer to either the shaded or
non-shaded version of a protobuf class in any particular context (though
usually context dictates one or the other). Core runs on shaded protobuf.
CPEPs continue to use whatever is on the classpath with
com.google.protobuf.* which is pb2.5.0 for the near future at least.
See above cited doc for follow-ons and downsides. In short, IDEs will complain
about not being able to find the shaded protobufs since shading happens at package
time; will fix by checking in all generated classes and relocated protobuf in
a follow-on. Also, CPEPs currently suffer an extra copy as they are marshalled from
non-shaded to shaded; to be fixed. Finally, our .protos are duplicated: once
shaded, and once not. Pain, but how else to expose our protos to CPEPs or
a C++ client that wants to talk with HBase AND shade protobuf?
Details:
Add a new hbase-protocol-shaded module. It is a copy of hbase-protocol
with everything relocated from o.a.h.h. to o.a.h.h.shaded. The new module
also includes the relocated pb. It does not include CPEPs. They stay in
their old location.
Add another module hbase-endpoint which has in it all the endpoints
that ship as part of hbase -- at least the ones that are not
entangled with core such as AccessControl and Auth. Move all protos
for these CPEPs here as well as their unit tests (mostly moving a
bunch of stuff out of hbase-server module)
Much of the change looks like this:
-import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
-import org.apache.hadoop.hbase.protobuf.generated.ClusterIdProtos;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterIdProtos;
In HTable and in HBaseAdmin, regularize the way Callables are used and also hide
protobuf usage as much as possible, moving it up into Callable super classes or out
to utility classes. Still TODO is adding in retries, etc., but that can wait on the
Procedure work which will redo all this.
Also in HTable and HBaseAdmin as well as in HRegionServer and Server, be explicit
when using non-shaded protobuf. Do the full-path so it is clear. This is around
endpoint coprocessors registration of services and execution of CPEP methods.
Shrunk ProtobufUtil by moving methods used by one CPEP only back to the CPEP either
into Client class or as new Util class; e.g. AccessControlUtil.
There are actually two versions of ProtobufUtil now; a shaded one and a subset
that is used by CPEPs doing non-shaded work.
Made it so hbase-common no longer depends on hbase-protocol (with Matteo's help)
R*Converter classes got moved down under shaded package -- they are for internal
use only. There are no non-shaded versions of these classes.
D hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractRegionServerCallable
D RetryingCallableBase
Not used anymore and we have too many tiers of Callables so removed/cleaned-up.
A ClientServiceCallable
Had to add this one. RegionServerCallable was made generic so it could be used
for a few Interfaces (Client and Admin). Then added ClientServiceCallable to
implement RegionServerCallable with the Client Interface.
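A simplified sketch of the generics pattern described above, using placeholder types rather than the actual HBase classes:

// The callable is parameterized on both the result type T and the RPC stub
// type S, so one base class can serve the Client and Admin protobuf services.
abstract class RegionServerCallableSketch<T, S> {
  protected S stub;                        // blocking stub for the target region server
  void setStub(S stub) { this.stub = stub; }
  abstract T rpcCall() throws Exception;   // subclasses issue the actual RPC here
}

// Placeholder standing in for the protobuf-generated ClientService stub.
interface ClientServiceStub { }

// Binds the stub type once, so per-operation callables only implement rpcCall().
abstract class ClientServiceCallableSketch<T> extends RegionServerCallableSketch<T, ClientServiceStub> { }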
In some particular deployments, the Replication code believes it has
reached EOF for a WAL prior to successfully parsing all bytes known to
exist in a cleanly closed file.
Consistently this failure happens due to an InvalidProtocolBufferException
after some number of seeks during our attempts to tail the in-progress
RegionServer WAL. As a work-around, this patch treats cleanly closed
files differently than other execution paths. If an EOF is detected due
to parsing or other errors while there are still unparsed bytes before
the end-of-file trailer, we now reset the WAL to the very beginning and
attempt a clean read-through.
In current testing, a single such reset is sufficient to work around
observed dataloss. However, the above change will retry a given WAL file
indefinitely. On each such attempt, a log message like the below will
be emitted at the WARN level:
Processing end of WAL file '{}'. At position {}, which is too far away
from reported file length {}. Restarting WAL reading (see HBASE-15983
for details).
Additionally, this patch adds further log detail at the TRACE
level about file offsets seen while handling recoverable errors. It also
adds metrics that measure the use of this recovery mechanism.
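An illustrative sketch of the reset decision described above; class, method, and parameter names here are placeholders, not the actual replication reader code:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class WalEofRecoverySketch {
  private static final Logger LOG = LoggerFactory.getLogger(WalEofRecoverySketch.class);

  // Decide whether to rewind a cleanly closed WAL to offset 0 and re-read it
  // after a parse error that looks like a premature EOF: if the current read
  // position still leaves unparsed bytes before the end-of-file trailer, reset.
  static boolean shouldResetAndReread(String walName, long position, long fileLength,
      long trailerSize) {
    boolean unparsedBytesRemain = position < fileLength - trailerSize;
    if (unparsedBytesRemain) {
      LOG.warn("Processing end of WAL file '{}'. At position {}, which is too far away from"
          + " reported file length {}. Restarting WAL reading (see HBASE-15983 for details).",
          walName, position, fileLength);
    }
    return unparsedBytesRemain;
  }
}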
Four parameters, "timestamp", "minTimestamp", "maxTimestamp" and
"maxVersions", are added to HBaseSparkConf. Users can select a single
timestamp, or a time range bounded by a minimum and a maximum timestamp.
Signed-off-by: Sean Busbey <busbey@apache.org>
Signed-off-by: Ted Yu <tedyu@apache.org>
Signed-off-by: Jerry He <jerryjch@apache.org>
Some tables, links, and other output do not render right in the output,
either because of Asciidoc code mistakes or the wrong formatting
choices. Make improvements.
This is the ID of a Google Custom Search Engine. I configured this one to
search hbase.apache.org, issues.apache.org/browse/HBASE-*, and the
user and dev mailing lists.
We have an awesome community resource in our online book. It's maintained and looked after with
diligence. We also have an HBase section on the hadoop wiki that hasn't been updated since 2012.
Let's sift through the pages of the wiki, bring over any content that's still relevant and
not already present in the book, and kill the wiki.
* We now have apidocs, devapidocs, testapidocs, and testdevapidocs
* When using reportSets, the Javadoc plugin ignores the plugin configuration,
so I propagated it to the individual reportSet configuration blocks.
* I was able to remove some of the logic that moved / copied things around
in post-site.
* We now have xref and xref-test (these are superfluous but something needs them
so I left them)
* Added source to Javadocs -- you can click a method or property to browse its source.
More user-friendly than xref maybe.
* You can now get the whole site to build by doing 'mvn clean site site:stage' and
it takes about 10 minutes.