Make it so our published poms carry the minimum needed to run
HBase; the published pom has no profiles -- the profiles
specified at build time are resolved, their dependencies inlined,
and then they are stripped -- and no build-time dependencies,
plugins, or properties. Resultant poms have explicit
hadoop lib versions baked in -- no more being able to choose
hbase with hadoop2 or hadoop3 at downstream build time by setting
a '-Dhadoop.profile=X.0'.
The pattern is to add profiles to sub-modules that have none when
the flatten plugin complains it can't resolve an hadoop
dependency's 'version' (e.g. hadoop-common, hadoop-hdfs).
Adding the hadoop-2.0 and hadoop-3.0 profiles in the sub-module
makes it so the flatten plugin can figure 'hadoop.version'
definitively.
Another spin on the above happens when profiles already exist
in the sub-module but the flatten plugin complains it can't
figure the version of an hadoop dependency NOT under
profiles. Below, we move the delinquent hadoop dependency under
the existing profiles (minikdc was the usual dependency outside
profiles in sub-modules that flatten complained about).
Sometimes, when moving an hadoop dependency under a profile, there
would be excludes on the local dependency. If the parent pom's
excludes section was missing the local excludes, we added them
up to the parent module so all excluding is done up there in
the parent profile's dependencyManagement section.
* hbase-asyncfs/pom.xml
* hbase-endpoint/pom.xml
* hbase-examples/pom.xml
* hbase-http/pom.xml
* hbase-rest/pom.xml
* hbase-server/pom.xml
Move the minikdc under profiles so it picks up the appropriate hadoop
version when the flatten plugin runs.
* hbase-hadoop2-compat/pom.xml
Add hadoop2 and hadoop3 profiles and move hadoop-common, etc.
under them so we pick up the appropriate hadoop version when the
flatten plugin runs.
* hbase-mapreduce/pom.xml
Move hadoop dependencies under profiles so the right version is
available when the flatten plugin runs.
* hbase-shaded/hbase-shaded-testing-util/pom.xml
Add profiles for hadoop-2.0 and hadoop-3.0 and move the
hadoop dependencies under them.
* pom.xml
Add the flatten plugin with the flatten profiles enabled.
Add a few excludes on hadoop profiles picked up from sub-modules.
E.g. exclude bouncycastle bcprov-jdk15 when we include minikdc.
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
This is no longer needed since we've transitioned to the shaded Jersey shipped in
hbase-thirdparty. Also drop the supplemental models entry.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Andrew Purtell <apurtell@apache.org>
This is a demonstration of visualization of regions on the cluster. The visualization is a stacked
bar chart showing total storefile size per table per region server, with the x-axis being server
names, the y-axis being storefile size, and the bars stacked per table. The visualization is
generated entirely on the fly from within the browser, implemented using Vega Lite. So far, Vega
appears to handle rendering this visualization for a cluster of over 700 region servers with
approximately 300,000 regions.
Per [0], include an update to the top-level LICENSE.txt. Also update LICENSE files in all binary
distributions (i.e., jars), by way of LICENSE.vm. Vega uses a BSD 3-clause variant without
advertising clause, and as such is a "Category A" license, per [1].
No changes are made to the NOTICE files, as per the existing example of bundling the minified
jQuery, which is also under a Category A license.
[0]: https://infra.apache.org/licensing-howto.html
[1]: https://www.apache.org/legal/resolved.html#category-a
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Publishes a set of JSON endpoints following a RESTful structure, which expose a subset of the
`o.a.h.h.ClusterMetrics` object tree. The URI structure is as follows:
/api/v1/admin/cluster_metrics
/api/v1/admin/cluster_metrics/live_servers
/api/v1/admin/cluster_metrics/dead_servers
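For illustration, a minimal Java 11+ client sketch; the host, port, and
class name here are hypothetical, so adjust for your deployment:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ClusterMetricsClient {
      public static void main(String[] args) throws Exception {
        // Hypothetical Master info server host/port.
        URI uri = URI.create(
            "http://master.example.com:16010/api/v1/admin/cluster_metrics/live_servers");
        HttpRequest request = HttpRequest.newBuilder(uri)
            .header("Accept", "application/json")
            .GET()
            .build();
        // The body is the JSON rendering of the exposed ClusterMetrics subset.
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
      }
    }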
Signed-off-by: Sean Busbey <busbey@apache.org>
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Avoid the pattern where a Random object is allocated, used once or twice, and
then left for GC. This pattern triggers warnings from some static analysis
tools because it leads to poor effective randomness. In a few cases we were
legitimately suffering from this issue; in others the change is still good to
reduce noise in analysis results.
Use ThreadLocalRandom where there is no requirement to set the seed, to gain
good reuse.
Where useful, relax use of SecureRandom to plain Random or ThreadLocalRandom,
which are unlikely to block if the system entropy pool is low, wherever we
don't need cryptographically strong randomness for the use case. The exception
to this is the normalization of use of Bytes#random to fill byte arrays with
randomness: because Bytes#random may be used to generate key material, it must
be backed by SecureRandom.
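An illustrative before/after sketch of the pattern (the class and method
names are hypothetical):

    import java.security.SecureRandom;
    import java.util.Random;
    import java.util.concurrent.ThreadLocalRandom;

    public class RandomnessExamples {
      // Anti-pattern: allocate a Random, use it once, leave it for GC.
      static long saltBefore() {
        return new Random().nextLong();
      }

      // Preferred: ThreadLocalRandom is reused per thread, needs no seed,
      // and will not block on the entropy pool.
      static long saltAfter() {
        return ThreadLocalRandom.current().nextLong();
      }

      // Key material stays on SecureRandom, as required of Bytes#random.
      static byte[] newKeyMaterial(int len) {
        byte[] bytes = new byte[len];
        new SecureRandom().nextBytes(bytes);
        return bytes;
      }
    }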
Signed-off-by: Duo Zhang <zhangduo@apache.org>
When starting a jetty http server, one can explicitly exclude certain (insecure)
SSL cipher suites. This can be especially important when the HBase cluster
needs to be compliant with security regulations (e.g. FIPS).
Currently it is possible to set the excluded ciphers for the ThriftServer
("hbase.thrift.ssl.exclude.cipher.suites") or for the RestServer
("hbase.rest.ssl.exclude.cipher.suites"), but one cannot configure it for the
regular InfoServer started by e.g. the master or region servers.
In this commit I introduce a new configuration,
"ssl.server.exclude.cipher.list", to configure the excluded cipher suites for the
http server started by the InfoServer. This parameter has the same name and
works the same way as the one already implemented in hadoop (e.g. for hdfs/yarn).
See: HADOOP-12668, HADOOP-14341
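A sketch of setting the new property programmatically (the cipher suite names
below are examples only; the same key is normally set in hbase-site.xml):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ExcludeCipherSuitesExample {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Comma-separated cipher suites the InfoServer's jetty instance
        // should refuse to negotiate; these names are examples only.
        conf.set("ssl.server.exclude.cipher.list",
            "TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_RSA_WITH_RC4_128_MD5");
      }
    }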
Co-authored-by: Mate Szalay-Beko <symat@apache.com>
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
HBASE-25267 Make SSL keystore type configurable in HBase RESTServer
In this patch I introduce the hbase.rest.ssl.keystore.type parameter,
enabling us to customize the keystore type for the REST server. If the
parameter is not provided, then we fall back to the current behaviour
(which assumes keystore type JKS).
This is similar to how we already configure the InfoServer objects with the
ssl.server.keystore.type parameter to set up HTTPS for the various admin UIs.
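A sketch of the new parameter in use (the PKCS12 value is just an example;
the same key is normally set in hbase-site.xml):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RestKeystoreTypeExample {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Defaults to JKS when unset.
        conf.set("hbase.rest.ssl.keystore.type", "PKCS12");
      }
    }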
Signed-off-by: Wellington Chevreuil <wellington.chevreuil@gmail.com>
Signed-off-by: Balazs Meszaros <meszibalu@apache.org>
Signed-off-by: Sean Busbey <busbey@apache.org>