After a bit of research into [0] and [1], and a bit of
experimentation, it seems we can use a partial wildcard expression
for these version strings. Let's try this for now. If it works out, we
should expand this usage to all the package version numbers, pinning
them to their epoch:upstream-version components.
[0]: http://manpages.ubuntu.com/manpages/xenial/en/man5/deb-version.5.html
[1]: http://manpages.ubuntu.com/manpages/xenial/man8/apt-get.8.html
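For the record, a hypothetical example of such a pin (package name and
version are stand-ins, not the actual packages changed here):
```
# hold the epoch:upstream-version; let the Debian revision float
# behind the trailing wildcard
apt-get install -y --no-install-recommends 'shellcheck=0.7.0-*'
```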
Signed-off-by: Sean Busbey <busbey@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
On 2.x branches, we need to explicitly activate profiles for H3. On
master, all H2 support is dropped, which means no special profiles are
required for H3 (though there is still a profile there to encapsulate
H3 logic).
We need to make sure that the yetus invocation can correctly pass down
any profile information into the personality, so we activate the exact
profiles we want.
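For illustration, on 2.x branches the activation we need to push through
yetus boils down to something like this (the property name follows
HBase's poms; the exact plumbing is what this change wires up):
```
# branch-2: Hadoop 3 must be requested explicitly; master needs no switch
mvn -Dhadoop.profile=3.0 clean install
```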
Closes #1609
Co-authored-by: Istvan Toth <stoty@cloudera.com>
Signed-off-by: stack <stack@apache.org>
Pass --threads=2 to mvn when yetus runs so we get some parallelism
when dependencies allow.
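A sketch of what that amounts to inside the personality (the array
handling is an assumption):
```
# ask maven for two worker threads; modules still build serially
# whenever their dependencies force it
MAVEN_ARGS=("--threads=2" "${MAVEN_ARGS[@]}")
```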
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Lower the jdk8 forked JVM heap from 2800 to 2200 and the jdk11 heap
from 3200 to 2200. Lower the mvn heap from 4G to 3.6G.
Reduce how many puts TestMultiRespectsLimits does, because the old
count pushed the forked heap over 2.5G.
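A hedged sketch of the knobs involved (the property and variable names
are assumptions; the values mirror the change):
```
# forked test JVM heap, per the jdk8/jdk11 numbers above
mvn -Dsurefire.Xmx=2200m test
# maven's own heap
export MAVEN_OPTS="-Xmx3600m"
```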
Signed-off-by: Sean Busbey <busbey@apache.org>
This implementation is almost surely incorrect. Personality
initialization parses the `--hadoop-profile` argument and sets
`HADOOP_PROFILE`. That value is then used to build an `extras` value
that is passed along to module initialization. I'm guessing that the
`extras` value needs to be honored down in the shadedjars module. I'm
not clear on how to make that work (need to study the interfaces at
play here), so taking the more ham-handed approach of referring to
`HADOOP_PROFILE`. I'm not sure if this will even work, or if it will
only work because the `foo_yetus.sh` scripts happen to use a variable
of the same name.
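Roughly, the ham-handed version looks like this (a sketch, not the
personality's real interface):
```
# shadedjars module: lean on the environment variable instead of the
# parsed --hadoop-profile argument
if [[ -n "${HADOOP_PROFILE}" ]]; then
  extras="${extras} -Dhadoop.profile=${HADOOP_PROFILE}"
fi
```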
Signed-off-by: Jan Hentschel <jan.hentschel@ultratendency.com>
Signed-off-by: stack <stack@apache.org>
Rebuild our Dockerfile with support for multiple JDK versions. Use
multiple stages in the Jenkinsfile instead of yetus's multijdk because
of YETUS-953. Run those multiple stages in parallel to speed up
results.
Note that multiple stages means multiple Yetus invocations means
multiple comments on the PreCommit. This should become more obvious to
users once we can make use of GitHub Checks API, HBASE-23902.
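Each parallel stage then amounts to its own yetus run pinned to a
single JDK, along these lines (paths and script name are assumptions):
```
# one invocation per stage, each with its own JDK
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64  ./dev-support/hbase_nightly_yetus.sh
JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 ./dev-support/hbase_nightly_yetus.sh
```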
Closes #1183
Signed-off-by: Sean Busbey <busbey@apache.org>
Conflicts:
dev-support/Jenkinsfile_GitHub
dev-support/hbase-personality.sh
I saw this over on
https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/2447/console. Looks
like we need to bump the memory allocation for maven. I wonder if this
is the underlying cause of HBASE-22470.
```
16:38:47 ============================================================================
16:38:47 ============================================================================
16:38:47 Finished build.
16:38:47 ============================================================================
16:38:47 ============================================================================
16:38:47
16:38:47
Post stage
[Pipeline] stash
16:38:48 Warning: overwriting stash 'hadoop2-result'
16:38:48 Stashed 1 file(s)
[Pipeline] junit
16:38:48 Recording test results
16:38:54 Remote call on H2 failed
Error when executing always post condition:
java.io.IOException: Remote call on H2 failed
at hudson.remoting.Channel.call(Channel.java:963)
at hudson.FilePath.act(FilePath.java:1072)
at hudson.FilePath.act(FilePath.java:1061)
at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:114)
at hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:137)
at hudson.tasks.junit.JUnitResultArchiver.parseAndAttach(JUnitResultArchiver.java:167)
at hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:52)
at hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:25)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space
at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208)
at com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.scanData(XMLEntityScanner.java:1515)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanCDATASection(XMLDocumentFragmentScannerImpl.java:1654)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3014)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602)
at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643)
at org.dom4j.io.SAXReader.read(SAXReader.java:465)
at org.dom4j.io.SAXReader.read(SAXReader.java:343)
at hudson.tasks.junit.SuiteResult.parse(SuiteResult.java:178)
at hudson.tasks.junit.TestResult.parse(TestResult.java:348)
at hudson.tasks.junit.TestResult.parsePossiblyEmpty(TestResult.java:281)
at hudson.tasks.junit.TestResult.parse(TestResult.java:206)
at hudson.tasks.junit.TestResult.parse(TestResult.java:178)
at hudson.tasks.junit.TestResult.<init>(TestResult.java:143)
at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:146)
at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:118)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3052)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
... 4 more
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
16:38:54 Failed in branch yetus jdk8 hadoop2 checks
```
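The bump in question is along these lines (variable and value are
illustrative; whether this cures the agent-side OOM is the open
question above):
```
# give maven more headroom on the nightly workers
export MAVEN_OPTS="-Xmx4g"
```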
Signed-off-by: stack <stack@apache.org>
Set the fork count for the first and second parts to 0.5C. Add a bit
of doc on this, as well as some qualification of our test categories.
Also add -T0.5C to MAVEN_ARGS in the hbase personality.
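A sketch of the resulting invocation (the surefire property names are
assumptions):
```
# half a fork per core for each test part, plus parallel module builds
mvn -T0.5C -Dsurefire.firstPartForkCount=0.5C \
    -Dsurefire.secondPartForkCount=0.5C test
```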
Decouple the HBase internals such that someone can implement
their own SASL-based authentication mechanism and plug it into
HBase RegionServers/Masters.
Comes with a design doc in dev-support/design-docs and an example in
hbase-examples known as "Shade", which uses a flat password file
for authenticating users.
Closes #884
Signed-off-by: Wellington Chevreuil <wchevreuil@apache.org>
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Signed-off-by: Reid Chan <reidchan@apache.org>
master/branches-2 specific changes: work around yetus overwriting JAVA_HOME
in the container with the host JAVA_HOME.
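One hedged way to picture the workaround (the real mechanism in the
scripts may differ):
```
# pin the container's JAVA_HOME rather than inheriting the host's value
docker run --env "JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64" "${IMAGE}" "$@"
```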
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
(cherry picked from commit 41990ba20a)
(cherry picked from commit bcad0d9f98)
HBase, a Hadoop-related project, must use the Hadoop label, please.
This build and others are starving the 'ubuntu' label, which other projects need to use.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
* wget should use timestamps to avoid re-downloading RC artifacts we already have.
* allow choosing a maven profile other than runAllTests for the test step (see the sketch below).
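A minimal sketch of both changes (variable names are placeholders):
```
# only fetch the RC artifact when the remote copy is newer than ours
wget --timestamping "${RC_ARTIFACT_URL}"
# let the caller pick the profile for the test step
mvn -P "${TEST_PROFILE:-runAllTests}" test
```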
Signed-off-by: Sean Busbey <busbey@apache.org>
(cherry picked from commit 74fb2040ea)