FINAL ADDENDUM. Removes the changes to dev-support/hbase-personality,
leaving it as it was. Everything else about the HBASE-23779 change remains:
surefire fork counts now depend on CPU count.
ADDENDUM: Refactor that came out of the discussion on https://github.com/apache/yetus/pull/86,
because what I committed originally, and amended in a subsequent
ADDENDUM, is not taking effect.
Set the fork count for the first and second parts to 0.5C. Add a bit of
documentation on this, as well as some qualification of our test categories.
Also adds -T0.5C to MAVEN_ARGS in the hbase personality.
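As a rough illustration of what the 'C' suffix means (a minimal sketch; the helper below is hypothetical and not part of the change), both surefire's forkCount and Maven's -T scale a 'C'-suffixed value by the number of available cores:
```python
# Minimal sketch (illustration only, not part of the change): how a
# 'C'-suffixed value such as "0.5C" resolves to a concrete number of
# concurrent forks/threads on a given host.
import multiprocessing

def effective_fork_count(spec: str) -> int:
    """Resolve a fork-count spec; a trailing 'C' multiplies by the core count."""
    cores = multiprocessing.cpu_count()
    if spec.endswith("C"):
        return max(1, int(float(spec[:-1]) * cores))
    return int(spec)

print(effective_fork_count("0.5C"))  # e.g. 8 on a 16-core build host
```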
I saw this over on
https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/2447/console. Looks
like we need to bump the memory allocation for maven. I wonder if this
is the underlying cause of HBASE-22470.
```
16:38:47 ============================================================================
16:38:47 ============================================================================
16:38:47 Finished build.
16:38:47 ============================================================================
16:38:47 ============================================================================
16:38:47
16:38:47
Post stage
[Pipeline] stash
16:38:48 Warning: overwriting stash 'hadoop2-result'
16:38:48 Stashed 1 file(s)
[Pipeline] junit
16:38:48 Recording test results
16:38:54 Remote call on H2 failed
Error when executing always post condition:
java.io.IOException: Remote call on H2 failed
at hudson.remoting.Channel.call(Channel.java:963)
at hudson.FilePath.act(FilePath.java:1072)
at hudson.FilePath.act(FilePath.java:1061)
at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:114)
at hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:137)
at hudson.tasks.junit.JUnitResultArchiver.parseAndAttach(JUnitResultArchiver.java:167)
at hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:52)
at hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:25)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space
at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208)
at com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.scanData(XMLEntityScanner.java:1515)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanCDATASection(XMLDocumentFragmentScannerImpl.java:1654)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3014)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602)
at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643)
at org.dom4j.io.SAXReader.read(SAXReader.java:465)
at org.dom4j.io.SAXReader.read(SAXReader.java:343)
at hudson.tasks.junit.SuiteResult.parse(SuiteResult.java:178)
at hudson.tasks.junit.TestResult.parse(TestResult.java:348)
at hudson.tasks.junit.TestResult.parsePossiblyEmpty(TestResult.java:281)
at hudson.tasks.junit.TestResult.parse(TestResult.java:206)
at hudson.tasks.junit.TestResult.parse(TestResult.java:178)
at hudson.tasks.junit.TestResult.<init>(TestResult.java:143)
at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:146)
at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:118)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3052)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
... 4 more
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
16:38:54 Failed in branch yetus jdk8 hadoop2 checks
```
Signed-off-by: stack <stack@apache.org>
Codify building the summary of what's new on a release line
branch (i.e., `branch-2`), but not yet released on earlier release
branches of that line.
Builds a CSV report that looks like https://home.apache.org/~ndimiduk/new_for_branch-2.csv
* HBASE-22853 Git/Jira Release Audit Tool
This is an application for performing an audit between the histories
on our git branches and the `fixVersion` field set on issues in
JIRA. It does this by building a SQLite database from the commits
found on each git branch, identifying Jira IDs and release tags, and
then requesting information about those issues from Jira. Once both
sources have been collected, queries can be performed against the
database to look for discrepancies between the sources of truth (and,
possibly, bugs in this script).
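For a sense of the approach (a minimal sketch only, not the audit tool itself; the branch names, table layout, and regex below are assumptions, and the Jira fixVersion lookup is omitted), the discrepancy checks boil down to SQL queries over (issue, branch) pairs mined from `git log`:
```python
# Illustrative sketch: cross-check which Jira IDs appear on one branch's
# history but not another's, using an in-memory SQLite table.
import re
import sqlite3
import subprocess

JIRA_ID = re.compile(r"HBASE-\d+")

def issues_on_branch(branch):
    """Collect the Jira IDs mentioned in commit subjects on a branch."""
    log = subprocess.run(["git", "log", "--pretty=%s", branch],
                         capture_output=True, text=True, check=True).stdout
    return set(JIRA_ID.findall(log))

def build_db(branches):
    """Store (issue, branch) pairs so discrepancies can be queried with SQL."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE git_commits (issue TEXT, branch TEXT)")
    for branch in branches:
        db.executemany("INSERT INTO git_commits VALUES (?, ?)",
                       [(issue, branch) for issue in issues_on_branch(branch)])
    return db

# Issues present on branch-2 but not yet on branch-2.2 (candidates for the
# "new on this release line" report, or for a fixVersion cross-check):
db = build_db(["origin/branch-2", "origin/branch-2.2"])
for (issue,) in db.execute(
        "SELECT DISTINCT issue FROM git_commits WHERE branch = ? "
        "AND issue NOT IN (SELECT issue FROM git_commits WHERE branch = ?)",
        ("origin/branch-2", "origin/branch-2.2")):
    print(issue)
```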
Signed-off-by: Sean Busbey <busbey@apache.org>
Signed-off-by: Josh Elser <elserj@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Decouple the HBase internals such that someone can implement
their own SASL-based authentication mechanism and plug it into
HBase RegionServers/Masters.
Comes with a design doc in dev-support/design-docs and an example in
hbase-examples known as "Shade", which uses a flat password file
for authenticating users.
Closes #884
Signed-off-by: Wellington Chevreuil <wchevreuil@apache.org>
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Signed-off-by: Reid Chan <reidchan@apache.org>
- Switch to nexus-staging-maven-plugin for asf-release.
- Refactor release-build to use mvn deploy and its output.
- Clean up some tabs in the root pom.
Signed-off-by: stack <stack@apache.org>
Make the scripts generic. Adds an option that allows you to specify the
'project'. Defaults to 'hbase' for core. Pass 'hbase-thirdparty',
'hbase-operator-tools', etc.
This commit includes a bunch of bugfixes and miscellaneous improvements
that came of trying to use the scripts to make RCs.
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
master/branches-2 specific changes: work around yetus overwriting JAVA_HOME
in the container with the host JAVA_HOME.
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
(cherry picked from commit 41990ba20a)
HBase, a Hadoop-related project, must use the Hadoop label, please.
This build, and others, are starving the 'ubuntu' label, which other projects need to use.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
* wget should use timestamps to avoid re-downloading RC artifacts we already have.
* Allow changing to maven profiles other than runAllTests for the test step.
Signed-off-by: Sean Busbey <busbey@apache.org>
These scripts came originally from Spark [1]. They were then
modified to suit the hbase context. Supersedes the old
../make_rc.sh script because what is here is more comprehensive,
doing more steps of the RM process as well as running in a
container so the RM build environment can be kept constant.
It:
* Tags the release.
* Updates RELEASENOTES.md and CHANGES.md.
* Sets the version to the release version.
* Sets the version to the next SNAPSHOT version.
* Builds, signs, and hashes all artifacts.
* Generates the API report.
* Pushes release tgzs to the dev dir in the Apache dist area.
* Pushes to repository.apache.org staging.
* Generates a vote email with filled-in fields.
The entry point is the do-release-docker.sh script. Pass -h to
see the available options. For example, running the command below will do all
of the steps above using the 'rm' dir under Downloads as the workspace:
$ ./do-release-docker.sh -d ~/Downloads/rm
1. https://github.com/apache/spark/tree/master/dev/create-release
Signed-off-by: Peter Somogyi <psomogyi@cloudera.com>
Signed-off-by: Duo Zhang <zhangduo@apache.org>