This implementation is almost surely incorrect. Personality
initialization parses the `--hadoop-profile` argument and sets
`HADOOP_PROFILE`. That value is then used to build an `extras` value
that is passed along to module initialization. I'm guessing that the
`extras` value needs to be honored down in the shadedjars module. I'm
not clear on how to make that work (I need to study the interfaces at
play here), so I'm taking the more ham-handed approach of referring to
`HADOOP_PROFILE` directly. I'm not sure if this will even work, or if
it will only work because the `foo_yetus.sh` scripts happen to use a
variable of the same name.
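Roughly, as a hedged sketch only (the function shape follows the Yetus
test-plugin convention, `MAVEN`/`MAVEN_ARGS` are the variables Yetus
exposes, and the exact goals are illustrative rather than the committed
change), the ham-handed version looks like:
```
# Read HADOOP_PROFILE directly instead of threading it through the
# "extras" value that module initialization builds.
function shadedjars_rebuild
{
  local extras=""
  if [[ -n "${HADOOP_PROFILE}" ]]; then
    # the hbase poms pick the hadoop build via -Dhadoop.profile
    extras="-Dhadoop.profile=${HADOOP_PROFILE}"
  fi
  # shellcheck disable=SC2086
  "${MAVEN}" "${MAVEN_ARGS[@]}" clean verify ${extras}
}
```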
Signed-off-by: Jan Hentschel <jan.hentschel@ultratendency.com>
Signed-off-by: stack <stack@apache.org>
Rebuild our Dockerfile with support for multiple JDK versions. Use
multiple stages in the Jenkinsfile instead of Yetus's multijdk support
because of YETUS-953. Run those multiple stages in parallel to speed up
results.
Note that multiple stages mean multiple Yetus invocations, which mean
multiple comments on the PreCommit. This should become more obvious to
users once we can make use of the GitHub Checks API (HBASE-23902).
Closes #1183
Signed-off-by: Sean Busbey <busbey@apache.org>
Conflicts:
dev-support/Jenkinsfile_GitHub
dev-support/hbase-personality.sh
I saw this over on
https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/2447/console. Looks
like we need to bump the memory allocation for maven. I wonder if this
is the underlying cause of HBASE-22470.
```
16:38:47 ============================================================================
16:38:47 ============================================================================
16:38:47 Finished build.
16:38:47 ============================================================================
16:38:47 ============================================================================
16:38:47
16:38:47
Post stage
[Pipeline] stash
16:38:48 Warning: overwriting stash 'hadoop2-result'
16:38:48 Stashed 1 file(s)
[Pipeline] junit
16:38:48 Recording test results
16:38:54 Remote call on H2 failed
Error when executing always post condition:
java.io.IOException: Remote call on H2 failed
at hudson.remoting.Channel.call(Channel.java:963)
at hudson.FilePath.act(FilePath.java:1072)
at hudson.FilePath.act(FilePath.java:1061)
at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:114)
at hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:137)
at hudson.tasks.junit.JUnitResultArchiver.parseAndAttach(JUnitResultArchiver.java:167)
at hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:52)
at hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:25)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space
at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208)
at com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.scanData(XMLEntityScanner.java:1515)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanCDATASection(XMLDocumentFragmentScannerImpl.java:1654)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3014)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602)
at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643)
at org.dom4j.io.SAXReader.read(SAXReader.java:465)
at org.dom4j.io.SAXReader.read(SAXReader.java:343)
at hudson.tasks.junit.SuiteResult.parse(SuiteResult.java:178)
at hudson.tasks.junit.TestResult.parse(TestResult.java:348)
at hudson.tasks.junit.TestResult.parsePossiblyEmpty(TestResult.java:281)
at hudson.tasks.junit.TestResult.parse(TestResult.java:206)
at hudson.tasks.junit.TestResult.parse(TestResult.java:178)
at hudson.tasks.junit.TestResult.<init>(TestResult.java:143)
at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:146)
at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:118)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3052)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
... 4 more
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
16:38:54 Failed in branch yetus jdk8 hadoop2 checks
```
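One way to bump the memory allocation, sketched here under assumptions
(the value and the place it gets set are illustrative, not the actual
committed change), is to raise the heap Maven gets via MAVEN_OPTS:
```
# Hedged sketch: give the maven JVM more heap so the build and report
# handling have headroom (the -Xmx value shown is illustrative).
export MAVEN_OPTS="${MAVEN_OPTS:-} -Xmx4G"
```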
Signed-off-by: stack <stack@apache.org>
Set the fork count for the first and second parts to be 0.5C. Add a bit
of doc on this as well as some qualification on our test categories.
Also add -T0.5C to MAVEN_ARGS in the hbase personality.
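A hedged sketch of what that looks like, assuming MAVEN_ARGS is the
array the personality hands to the Yetus maven invocation and that the
poms expose surefire.firstPartForkCount/secondPartForkCount-style
properties (both assumptions for illustration):
```
# In the personality (sketch): build with one maven thread per two cores.
MAVEN_ARGS=("-T0.5C" "${MAVEN_ARGS[@]}")
# Roughly equivalent on a developer command line:
#   mvn -T0.5C -Dsurefire.firstPartForkCount=0.5C \
#       -Dsurefire.secondPartForkCount=0.5C test
```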
Decouple the HBase internals such that someone can implement
their own SASL-based authentication mechanism and plug it into
HBase RegionServers/Masters.
Comes with a design doc in dev-support/design-docs and an example in
hbase-examples, known as "Shade", which uses a flat password file
for authenticating users.
Closes #884
Signed-off-by: Wellington Chevreuil <wchevreuil@apache.org>
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Signed-off-by: Reid Chan <reidchan@apache.org>
master/branches-2 specific changes: work around yetus overwriting JAVA_HOME
in the container with the host JAVA_HOME.
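A hedged illustration of the workaround; the SET_JAVA_HOME variable and
the idea of re-exporting it inside the yetus wrapper are assumptions for
this sketch, not necessarily what the Jenkinsfile does:
```
# Put the container's JDK back into JAVA_HOME after yetus has copied in
# the host value.
if [[ -n "${SET_JAVA_HOME:-}" ]]; then
  export JAVA_HOME="${SET_JAVA_HOME}"
fi
```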
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
(cherry picked from commit 41990ba20a)
(cherry picked from commit bcad0d9f98)
HBase, a Hadoop-related project, must use the Hadoop label please.
This build, and others, are starving the 'ubuntu' label, which other projects need to use.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
* wget should use timestamping to avoid re-downloading RC artifacts we already have (sketched after this list).
* allow choosing maven profiles other than runAllTests for the test step
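A minimal sketch of both ideas (the ARTIFACT_URL and MVN_TEST_PROFILE
names are illustrative, not the committed script):
```
# -N turns on wget timestamping, so artifacts already present and
# unchanged upstream are not fetched again.
wget -N "${ARTIFACT_URL}"
# Let the caller pick the profile instead of hard-coding runAllTests.
mvn -P"${MVN_TEST_PROFILE:-runAllTests}" test
```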
Signed-off-by: Sean Busbey <busbey@apache.org>
(cherry picked from commit 74fb2040ea)
ADDENDUM: Remove exception for scala files in findbugs now that we don't have any.
Signed-off-by: Sean Busbey <busbey@apache.org>
Signed-off-by: Peter Somogyi <psomogyi@cloudera.com>
(cherry picked from commit 2792253322)
(cherry picked from commit 54172c9890)
* gather up all the flaky-test tooling into one directory
* create Jenkins Pipeline DSL for the report generation and the flaky re-testing
* have the nightly per-branch job consume the results of flaky reporting
Signed-off-by: Mike Drob <mdrob@apache.org>
Cannot go to the latest checkstyle (8.9) yet due to
https://github.com/checkstyle/checkstyle/issues/5279
* move hbaseanti import checks to checkstyle
* implement a few missing equals checks, and ignore one
* fix lots of javadoc errors
Signed-off-by: Sean Busbey <busbey@apache.org>
* in a post-step, build status can either be "null" or "SUCCESS" to indicate success.
* before we do an SCM checkout for stages that post to the comment, set a default "we failed ¯\_(ツ)_/¯" comment.
Signed-off-by: Michael Stack <stack@apache.org>
Signed-off-by: Mike Drob <mdrob@apache.org>
* Ensure Jenkins steps that invoke bash inline use set -e
* machine stats script should check that the passed directory will work (sketched below)
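A hedged sketch of that directory guard in a machine-stats-style script
(the usage message and variable name are illustrative):
```
set -e
# Refuse to run if the requested output directory is missing or not
# writable, rather than failing part-way through collection.
output_dir="${1:?usage: $0 <output-dir>}"
if [[ ! -d "${output_dir}" ]] || [[ ! -w "${output_dir}" ]]; then
  echo "ERROR: '${output_dir}' is not a writable directory" >&2
  exit 1
fi
```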
Signed-off-by: Michael Stack <stack@apache.org>
* add a new test to build the refguide specifically instead of the whole site
* check for changes to src/main/asciidoc or src/main/xslt and run that test, and only that test (see the sketch after this list)
* check for changes to the hbase-default.xml file and build the refguide if found (though other tests may run too)
* fall back to relying on the yetus default for other changes
* fix some missing start_clock entries that caused longer-than-actual reported test times.
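An illustrative filefilter in the Yetus test-type style; the function
name and the `refguide` test name are assumptions for this sketch, not
copied from the committed personality:
```
function refguide_filefilter
{
  local filename=$1
  # Only the refguide needs rebuilding when the docs sources or the
  # defaults file change; add_test is the yetus hook that schedules a
  # test type for this run.
  if [[ ${filename} =~ src/main/asciidoc ]] || \
     [[ ${filename} =~ src/main/xslt ]] || \
     [[ ${filename} =~ hbase-default.xml ]]; then
    add_test refguide
  fi
}
```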
Signed-off-by: Mike Drob <mdrob@apache.org>
* do an SCM checkout on the stages that need access to source.
* ensure our install job runs on the ubuntu label
* copy jira comments to the main workspace
* simplify the jira comment
- rely on a parallel pipeline to ensure all stages always run
- define a non-CPS jira-commenting function
- comment on jiras in the changeset with a summary and links
Signed-off-by: Mike Drob <mdrob@apache.org>
Signed-off-by: Michael Stack <stack@apache.org>
* rely on git plumbing commands when checking if we've built the site for a particular commit already (see the sketch after this list)
* switch to forcing '-e' for bash
* add command line switches for: path to hbase, working directory, and publishing
* only export JAVA_HOME/MAVEN_HOME if they aren't already set.
* add some docs about assumptions
* Update javadoc plugin to consistently be version 3.0.0
* avoid duplicative site invocations on reactor modules
* update use of cp command so it works both on linux and mac
* manually skip enforcer plugin during build
* still doing install of all jars due to MJAVADOC-490, but then skip rebuilding during aggregate reports.
* avoid the pager on git-diff by teeing to a log file, which also helps later review in the case of big changesets.
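A hedged sketch of the git-plumbing check; the marker file and the
working_dir variable are hypothetical and only illustrate the idea of
comparing recorded and current commits with plumbing commands:
```
# rev-parse is plumbing, so its output is stable enough to compare in a
# script; "last-built.txt" is a hypothetical marker recording the source
# commit the published site was generated from.
current="$(git rev-parse HEAD)"
last_built="$(cat "${working_dir}/last-built.txt" 2>/dev/null || true)"
if [[ "${current}" == "${last_built}" ]]; then
  echo "Site already built for ${current}; skipping."
  exit 0
fi
```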
Signed-off-by: Michael Stack <stack@apache.org>
Signed-off-by: Misty Stanley-Jones <misty@apache.org>
Conflicts:
hbase-backup/pom.xml
hbase-spark-it/pom.xml
When running git rev-parse, sometimes the branch cannot be found unless
the remote is specified. This fix tries "origin" if the remote is not
specified and the branch is not found.
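A minimal sketch of that fallback (the `branch` variable is
illustrative):
```
# If the bare branch name does not resolve, retry with the default remote.
if ! git rev-parse --verify --quiet "${branch}" >/dev/null; then
  branch="origin/${branch}"
fi
git rev-parse --verify "${branch}"
```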
Signed-off-by: Sean Busbey <busbey@apache.org>
M dev-support/make_rc.sh
Disable checkstyle when building the site. It's an issue being fixed over in HBASE-19780
M hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
The clusterid was being set into the process only after the
regionserver registers with the Master. That can be too late for some
test clients in particular; e.g. TestZKAsyncRegistry needs it as soon
as it goes to run, which could be before the Master has called its run
method (which is the regionserver run method, which then calls back to
the master to register itself)... and only then did we set the
clusterid. HBASE-19694 changed the start order, which made it so this
test failed. Setting the clusterid right after we set it in zk makes
the test pass.
Another change was that backup masters were not going down on stop.
Backup masters were sleeping for the default zk period, which is 90
seconds, and were not being woken up to check for stop. On stop, the
master now tells the active master manager.
M hbase-server/src/test/java/org/apache/hadoop/hbase/TestJMXConnectorServer.java
Prevent creation of the acl table. It messes up our being able to go
down promptly.
M hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestRegionsOnMasterOptions.java
M hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java
M hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerReadRequestMetrics.java
Disabled for now because it wants to run with regions on the Master...
currently broken!
M hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestZKAsyncRegistry.java
Add a bit of debugging.
M hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDLSAsyncFSWAL.java
Disabled. Fails 40% of the time.
M hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDLSFSHLog.java
Disabled. Fails 33% of the time.
Disabled the stochastic load balancer for favored nodes because it
fails on occasion and we are not doing favored nodes in branch-2.
Jenkins fails the whole build immediately if any stage fails. Hadoop2
tests run before Hadoop3 tests, so Hadoop3 tests will run only if
Hadoop2 tests pass.