Lower the jdk8 forked JVM heap from 2800MB to 2200MB and the jdk11 heap
from 3200MB to 2200MB. Lower the maven heap from 4G to 3.6G.
Change how many puts TestMultiRespectsLimits does because the old count
pushed the forked heap over 2.5G in size.
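A minimal sketch of where those numbers land, assuming the forked heap
flows through surefire's argLine and the maven heap through MAVEN_OPTS:
```
# Maven's own JVM gets 3.6G; each surefire-forked test JVM gets 2200MB.
export MAVEN_OPTS="-Xmx3600m"
mvn test -DargLine="-Xmx2200m"
```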
Signed-off-by: Sean Busbey <busbey@apache.org>
Pass --threads=2 to mvn when yetus runs so we get some parallelism
when dependencies allow.
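For reference, this is maven's built-in parallel-build flag:
```
# Two builder threads; independent modules build concurrently.
mvn --threads=2 clean install
```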
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Current code gives me the following; notice the default values generated
for `RELEASE_VERSION` and `api_diff_tag`.
```
GIT_BRANCH [branch-2.3]:
Current branch VERSION is 2.3.0-SNAPSHOT.
RELEASE_VERSION [2.3.-1]: 2.3.0
NEXT_VERSION [2.3.0-SNAPSHOT]: 2.3.1-SNAPSHOT
RC_COUNT [0]:
GIT_REF [2.3.0RC0]:
api_diff_tag, [rel/2.2.0)]:
```
With this patch I get
```
GIT_BRANCH [branch-2.3]:
Current branch VERSION is 2.3.0-SNAPSHOT.
RELEASE_VERSION [2.3.0]:
NEXT_VERSION [2.3.1-SNAPSHOT]:
RC_COUNT [0]:
GIT_REF [2.3.0RC0]:
api_diff_tag, [rel/2.2.0]:
```
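A minimal sketch of the default derivation, assuming the branch version
ends in `-SNAPSHOT` (the shell below is illustrative, not the script's
actual code):
```
# Derive sensible defaults from the branch's SNAPSHOT version.
VERSION="2.3.0-SNAPSHOT"
RELEASE_VERSION="${VERSION%-SNAPSHOT}"                        # 2.3.0
# Bump the patch component for the next development version.
patch="${RELEASE_VERSION##*.}"
NEXT_VERSION="${RELEASE_VERSION%.*}.$((patch + 1))-SNAPSHOT"  # 2.3.1-SNAPSHOT
```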
Signed-off-by: Michael Stack <stack@apache.org>
Signed-off-by: Jan Hentschel <jan.hentschel@ultratendency.com>
* enhancements to git_jira_release_audit.py
* add aforementioned release branch report
* include default values in help doc output
* swap default db to a file on disk instead of in memory
* set logger to match file name
* add separate sql query log at DEBUG level
* more detailed usage info in README.md, including an example audit query (see the sketch after this list)
* update entries in fallback_actions.csv
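A hypothetical example of such an audit query; the database file name and
the schema (table and column names) are assumptions for illustration, not
the script's documented layout:
```
# Find commits whose Jira issue has no matching fixVersion (schema assumed).
sqlite3 audit.db \
  "SELECT g.jira_id, g.branch
     FROM git_commits g
     LEFT JOIN jira_versions j ON g.jira_id = j.key
    WHERE j.key IS NULL;"
```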
Signed-off-by: stack <stack@apache.org>
Signed-off-by: Jan Hentschel <jan.hentschel@ultratendency.com>
This implementation is almost surely incorrect. Personality
initialization parses the `--hadoop-profile` argument and sets
`HADOOP_PROFILE`. That value is then used to build an `extras` value
that is passed along to module initialization. I'm guessing that the
`extras` value needs to be honored down in the shadedjars module. I'm
not clear on how to make that work (need to study the interfaces at
play here), so taking the more ham-handed approach of referring to
`HADOOP_PROFILE`. I'm not sure if this will even work, or if it will
only work because the `foo_yetus.sh` scripts happen to use a variable
of the same name.
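For reference, a rough sketch of the wiring described above; the function
shape follows the Yetus personality convention, but everything here is an
assumption rather than the actual code:
```
# Parse --hadoop-profile during personality initialization (assumed shape).
personality_parse_args() {
  declare arg
  for arg in "$@"; do
    case ${arg} in
      --hadoop-profile=*)
        HADOOP_PROFILE=${arg#*=}
        ;;
    esac
  done
}

# The ham-handed approach: read HADOOP_PROFILE directly when building the
# extras passed to a module such as shadedjars.
if [[ -n "${HADOOP_PROFILE}" ]]; then
  extras="${extras} -Phadoop-${HADOOP_PROFILE}"
fi
```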
Signed-off-by: Jan Hentschel <jan.hentschel@ultratendency.com>
Signed-off-by: stack <stack@apache.org>
Rebuild our Dockerfile with support for multiple JDK versions. Use
multiple stages in the Jenkinsfile instead of yetus's multijdk because
of YETUS-953. Run those multiple stages in parallel to speed up
results.
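Loosely, the shape of the split; the stage names here are made up for
illustration:
```
# One image per JDK from the multi-stage Dockerfile; each Jenkins stage
# builds and runs its own, so Yetus gets invoked once per JDK.
docker build --target jdk8  --tag hbase-precommit:jdk8  dev-support/docker/
docker build --target jdk11 --tag hbase-precommit:jdk11 dev-support/docker/
```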
Note that multiple stages means multiple Yetus invocations means
multiple comments on the PreCommit. This should become more obvious to
users once we can make use of GitHub Checks API, HBASE-23902.
Closes #1183
Signed-off-by: Sean Busbey <busbey@apache.org>
* add design doc for original MOB changes as they were when HBase 2.0 came out
* add design doc for distributed MOB compaction
* remove configuration and commands no longer relevant after distributed MOB compaction
* add in discussion of configuration options
* allow asciimath formulas since we use them in the discussion
Closes #1232
Signed-off-by: Wellington Ramos Chevreuil <wchevreuil@apache.org>
FINAL ADDENDUM. Removes the changes to dev-support/hbase-personality,
leaving it as it was. All else about the HBASE-23779 change remains:
surefire fork counts become dependent on CPU count.
ADDENDUM: Refactor that came of discussion on https://github.com/apache/yetus/pull/86,
because what I committed originally, and amended in a subsequent
ADDENDUM, was not taking effect.
Set the fork count for the first and second parts to 0.5C. Add a bit of
doc on this too, as well as some qualification of our test categories.
Also add -T0.5C to MAVEN_ARGS in the hbase personality.
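Illustratively, the resulting invocation looks something like the below;
the surefire property names are assumptions, not necessarily the pom's
actual keys:
```
# 0.5C = half the available cores, both for maven's module-level
# parallelism (-T) and for the surefire-forked test JVMs.
mvn -T0.5C test \
  -Dsurefire.firstPartForkCount=0.5C \
  -Dsurefire.secondPartForkCount=0.5C
```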
I saw this over on
https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/2447/console. Looks
like we need to bump the memory allocation for maven (a sketch follows the
log below). I wonder if this is the underlying cause of HBASE-22470.
```
16:38:47 ============================================================================
16:38:47 ============================================================================
16:38:47 Finished build.
16:38:47 ============================================================================
16:38:47 ============================================================================
16:38:47
16:38:47
Post stage
[Pipeline] stash
16:38:48 Warning: overwriting stash 'hadoop2-result'
16:38:48 Stashed 1 file(s)
[Pipeline] junit
16:38:48 Recording test results
16:38:54 Remote call on H2 failed
Error when executing always post condition:
java.io.IOException: Remote call on H2 failed
at hudson.remoting.Channel.call(Channel.java:963)
at hudson.FilePath.act(FilePath.java:1072)
at hudson.FilePath.act(FilePath.java:1061)
at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:114)
at hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:137)
at hudson.tasks.junit.JUnitResultArchiver.parseAndAttach(JUnitResultArchiver.java:167)
at hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:52)
at hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:25)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space
at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208)
at com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.scanData(XMLEntityScanner.java:1515)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanCDATASection(XMLDocumentFragmentScannerImpl.java:1654)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3014)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602)
at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643)
at org.dom4j.io.SAXReader.read(SAXReader.java:465)
at org.dom4j.io.SAXReader.read(SAXReader.java:343)
at hudson.tasks.junit.SuiteResult.parse(SuiteResult.java:178)
at hudson.tasks.junit.TestResult.parse(TestResult.java:348)
at hudson.tasks.junit.TestResult.parsePossiblyEmpty(TestResult.java:281)
at hudson.tasks.junit.TestResult.parse(TestResult.java:206)
at hudson.tasks.junit.TestResult.parse(TestResult.java:178)
at hudson.tasks.junit.TestResult.<init>(TestResult.java:143)
at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:146)
at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:118)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3052)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
... 4 more
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
16:38:54 Failed in branch yetus jdk8 hadoop2 checks
```
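A sketch of the bump, assuming the allocation is carried via MAVEN_OPTS in
the job environment (where exactly it gets set is an assumption):
```
# More headroom for the maven JVM used by the nightly build.
export MAVEN_OPTS="-Xmx4g"
```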
Signed-off-by: stack <stack@apache.org>
Codify building the summary of what's new on a release line
branch (e.g., `branch-2`) but not yet released on earlier release
branches of that line.
Builds a CSV report that looks like https://home.apache.org/~ndimiduk/new_for_branch-2.csv
* HBASE-22853 Git/Jira Release Audit Tool
This is an application for performing an audit between the histories
on our git branches and the `fixVersion` field set on issues in
JIRA. It does this by building a SQLite database from the commits
found on each git branch, identifying Jira IDs and release tags, and
then requesting information about those issues from Jira. Once both
sources have been collected, queries can be performed against the
database to look for discrepancies between the sources of truth (and,
possibly, bugs in this script).
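The commit-scanning step amounts to something like the following; this is
an illustration, not the script's actual implementation:
```
# Collect the distinct Jira IDs mentioned in a branch's history.
git log --oneline origin/branch-2 | grep -oE 'HBASE-[0-9]+' | sort -u
```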
Signed-off-by: Sean Busbey <busbey@apache.org>
Signed-off-by: Josh Elser <elserj@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Decouple the HBase internals such that someone can implement
their own SASL-based authentication mechanism and plug it into
HBase RegionServers/Masters.
Comes with a design doc in dev-support/design-docs and an example in
hbase-examples known as "Shade", which uses a flat password file
for authenticating users.
Closes #884
Signed-off-by: Wellington Chevreuil <wchevreuil@apache.org>
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Signed-off-by: Reid Chan <reidchan@apache.org>
- switch to nexus-staging-maven-plugin for asf-release
- refactor release-build to use mvn deploy and its output (sketched below)
- clean up some tabs in the root pom
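In outline, the deploy step becomes a plain maven invocation against the
asf-release profile; the exact flags are illustrative:
```
# Stage artifacts to repository.apache.org via nexus-staging-maven-plugin.
mvn -DskipTests -Pasf-release clean deploy
```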
Signed-off-by: stack <stack@apache.org>
Make the scripts generic. Adds an option that lets you specify the
'project'. Defaults to 'hbase' for core. Pass 'hbase-thirdparty',
'hbase-operator-tools', etc.
This commit includes a bunch of bugfixes and miscellaneous improvements
that came of trying to use the scripts to make RCs.
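Hypothetical usage; the flag name for selecting the project is an
assumption based on the description above, not necessarily the script's
real interface:
```
# Default project is 'hbase'; pass another repo to cut its RC instead.
./do-release-docker.sh -d ~/Downloads/rm -p hbase-thirdparty
```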
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
master/branches-2 specific changes: work around yetus overwriting JAVA_HOME
in the container with the host JAVA_HOME.
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
(cherry picked from commit 41990ba20a)
HBase, a Hadoop-related project, must use the Hadoop label, please.
This build, and others, are starving the 'ubuntu' label, which other projects need to use.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
* wget should use timestamps to avoid re-downloading RC artifacts we already have (see the sketch after this list).
* allow changing to maven profiles other than runAllTests for the test step
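For reference, wget's timestamping mode, which skips the download when the
local copy is already current (the URL variable is a placeholder):
```
# --timestamping (-N): re-fetch only if the remote file is newer than
# the copy we already have on disk.
wget --timestamping "${RC_ARTIFACT_URL}"
```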
Signed-off-by: Sean Busbey <busbey@apache.org>
These scripts came originally from spark [1]. They were then
modified to suit the hbase context. Supersedes the old
../make_rc.sh script because what is here is more comprehensive,
doing more steps of the RM process as well as running in a
container so the RM build environment can be constant.
It:
* Tags release
* Updates RELEASENOTES.md and CHANGES.md.
* Sets version to the release version
* Sets version to the next SNAPSHOT version.
* Builds, signs, and hashes all artifacts.
* Generates the API report.
* Pushes release tgzs to the dev dir in the apache dist area.
* Pushes to repository.apache.org staging.
* Generates a vote email with filled-in fields.
The entry point is the do-release-docker.sh script. Pass -h to
see available options. For example, the command below will do all the
steps above using the 'rm' dir under Downloads as the workspace:
$ ./do-release-docker.sh -d ~/Downloads/rm
1. https://github.com/apache/spark/tree/master/dev/create-release
Signed-off-by: Peter Somogyi <psomogyi@cloudera.com>
Signed-off-by: Duo Zhang <zhangduo@apache.org>