Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
Signed-off-by: Zach York <zyork@apache.org>
* TestFullLogReconstruction: log the server we've chosen to expire, then note where we start counting rows
* TestAsyncTableScanException: use a constant for row counts
* TestRawAsyncTableLimitedScanWithFilter: check that the connection was made before closing it in tearDown
* TestLogsCleaner: use a single mod time, and make sure it is less than now in case the whole test runs within the same millisecond (which would cause the test to fail)
* TestReplicationBase: check that the test table is non-null before closing it in tearDown
FINAL ADDENDUM: Removes the changes to dev-support/hbase-personality,
leaving it as it was. All else about the HBASE-23779 change remains:
surefire fork counts are now dependent on CPU count.
ADDENDUM: Refactor that comes of the discussion on https://github.com/apache/yetus/pull/86,
because what I committed originally, and amended in a subsequent
ADDENDUM, is not taking effect.
Set the fork count for the first and second parts to 0.5C. Add a bit of
documentation on this, as well as some qualification of our test categories.
Also adds -T0.5C to MAVEN_ARGS in the hbase personality.
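For reference, a minimal pom.xml sketch of what a CPU-relative surefire fork count looks like; the literal value here is illustrative (the real pom may route it through a property):
```xml
<!-- Sketch only: forkCount values suffixed with "C" are multiplied by the
     number of available CPU cores, so 0.5C forks one JVM per two cores. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkCount>0.5C</forkCount>
  </configuration>
</plugin>
```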
1. Survive flaky rerunning by converting the static BeforeClass setup
into instance-level Before.
2. Break the test method into two, one for running over each of the
snapshot manifest versions.
Signed-off-by: stack <stack@apache.org>
This is causing me issues with parallel test runs.
Also allow setting the surefire reports and temp directories via command line.
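A sketch of the mechanism, with hypothetical property names (surefire.reportsDirectory, surefire.tempDir); the surefire parameters themselves (reportsDirectory, tempDir) are standard:
```xml
<!-- Hypothetical property names for illustration: wiring plugin parameters
     to properties makes them settable on the command line, e.g.
     mvn test -Dsurefire.reportsDirectory=/alt/reports -Dsurefire.tempDir=/alt/tmp -->
<configuration>
  <reportsDirectory>${surefire.reportsDirectory}</reportsDirectory>
  <tempDir>${surefire.tempDir}</tempDir>
</configuration>
```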
Signed-off-by: stack <stack@apache.org>
I saw this over on
https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/2447/console. Looks
like we need to bump the memory allocation for Maven. I wonder if this
is the underlying cause of HBASE-22470.
```
16:38:47 ============================================================================
16:38:47 ============================================================================
16:38:47 Finished build.
16:38:47 ============================================================================
16:38:47 ============================================================================
16:38:47
16:38:47
Post stage
[Pipeline] stash
16:38:48 Warning: overwriting stash 'hadoop2-result'
16:38:48 Stashed 1 file(s)
[Pipeline] junit
16:38:48 Recording test results
16:38:54 Remote call on H2 failed
Error when executing always post condition:
java.io.IOException: Remote call on H2 failed
at hudson.remoting.Channel.call(Channel.java:963)
at hudson.FilePath.act(FilePath.java:1072)
at hudson.FilePath.act(FilePath.java:1061)
at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:114)
at hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:137)
at hudson.tasks.junit.JUnitResultArchiver.parseAndAttach(JUnitResultArchiver.java:167)
at hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:52)
at hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:25)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space
at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208)
at com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.scanData(XMLEntityScanner.java:1515)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanCDATASection(XMLDocumentFragmentScannerImpl.java:1654)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3014)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602)
at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643)
at org.dom4j.io.SAXReader.read(SAXReader.java:465)
at org.dom4j.io.SAXReader.read(SAXReader.java:343)
at hudson.tasks.junit.SuiteResult.parse(SuiteResult.java:178)
at hudson.tasks.junit.TestResult.parse(TestResult.java:348)
at hudson.tasks.junit.TestResult.parsePossiblyEmpty(TestResult.java:281)
at hudson.tasks.junit.TestResult.parse(TestResult.java:206)
at hudson.tasks.junit.TestResult.parse(TestResult.java:178)
at hudson.tasks.junit.TestResult.<init>(TestResult.java:143)
at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:146)
at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:118)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3052)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
... 4 more
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
16:38:54 Failed in branch yetus jdk8 hadoop2 checks
```
Signed-off-by: stack <stack@apache.org>
These classifications come of running at various fork counts. A test
may complete quickly at a low fork count, but if it is accessing disk, it
will run much slower when the fork count is high. This edit accommodates
some of this phenomenon.
Signed-off-by: Bharath Vissapragada <bharathv@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Signed-off-by: Jan Hentschel <janh@apache.org>
Codify building the summary of what's new on a release line
branch (i.e., `branch-2`) but not yet released on earlier release
branches of that line.
Builds a CSV report that looks like https://home.apache.org/~ndimiduk/new_for_branch-2.csv
The Hadoop AccessControlList allows us to specify admins of the web UI
via a list of users and/or groups. Admins of the web UI can view
potentially sensitive data and mutate the system.
hbase.security.authentication.spnego.admin.users is a comma-separated
list of users who are admins.
hbase.security.authentication.spnego.admin.groups is a comma-separated
list of groups whose members are admins. Either of these
configuration properties may also contain an asterisk (*), which denotes
"any entity" (e.g., any user or any group).
Previously, when a user was denied from an endpoint that was
designated for admins, they received an HTTP/401. It is
more correct to return HTTP/403: the user was correctly authenticated,
but disallowed from fetching the given resource. This commit
incorporates that change.
hbase.security.authentication.ui.config.protected also exists for users
who have sensitive information stored in the Hadoop service
configuration and want to limit access to this endpoint. By default,
the Hadoop configuration endpoint is not protected and any
authenticated user can access it.
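A minimal hbase-site.xml sketch of the three properties described above; the user and group values are illustrative only:
```xml
<property>
  <name>hbase.security.authentication.spnego.admin.users</name>
  <!-- comma-separated admin users; '*' would admit any user -->
  <value>hbase_admin,ops_user</value>
</property>
<property>
  <name>hbase.security.authentication.spnego.admin.groups</name>
  <!-- comma-separated groups whose members are admins -->
  <value>hbase_admins</value>
</property>
<property>
  <name>hbase.security.authentication.ui.config.protected</name>
  <!-- defaults to false: any authenticated user may view the Hadoop configuration endpoint -->
  <value>true</value>
</property>
```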
The test is based on work by Nihal Jain in HBASE-20472.
Co-authored-by: Nihal Jain <nihaljain.cs@gmail.com>
Signed-off-by: Sean Busbey <busbey@apache.org>
Signed-off-by: Bharath Vissapragada <bharathv@apache.org>
hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
hbase-server/src/main/java/org/apache/hadoop/hbase/executor/EventHandler.java
Errorprone complains about a mismatch in types when comparing. Implement
the compare in the base interface.
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
Errorprone complains that the protobuf methods never return null.
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSinkManager.java
Needed a redo because errorprone complains that we can't mock guava's Service.
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionReplicasWithRestartScenarios.java
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
Unrelated: adds one-liner debug statements chasing other test
failures.
* HBASE-22853 Git/Jira Release Audit Tool
This is an application for performing an audit between the histories
on our git branches and the `fixVersion` field set on issues in
JIRA. It does this by building a SQLite database from the commits
found on each git branch, identifying Jira IDs and release tags, and
then requesting information about those issues from Jira. Once both
sources have been collected, queries can be performed against the
database to look for discrepancies between the sources of truth (and,
possibly, bugs in this script).
Signed-off-by: Sean Busbey <busbey@apache.org>
Signed-off-by: Josh Elser <elserj@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
HBASE-21345 - [hbck2] Allow version check to proceed even though master is 'initializing'.
Just remove the state check from the getClusterStatus call.
Signed-off-by: Michael Stack <stack@apache.org>
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
Signed-off-by: Sakthi <sakthi@apache.org>
(cherry picked from commit dd8496a546)