Commit Graph

54 Commits

Author SHA1 Message Date
Armin Braun f7d71b5930
Fix GCS Mock Range Downloads (#52804) (#52830)
We were not correctly respecting the download range, which led
to the GCS SDK client closing the connection at times.
This also fixes another instance of failing to drain the request fully before sending the response headers.

Closes #51446
2020-02-26 18:38:02 +01:00
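A minimal sketch of the range handling described in the commit above; the class and method names are hypothetical and the real mock's parsing is more involved, but the core idea is to honor the inclusive `bytes=start-end` header instead of always returning the full blob.

```java
import java.util.Arrays;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

final class RangeDownloadExample {
    static byte[] sliceForRange(byte[] blob, String rangeHeader) {
        if (rangeHeader == null) {
            return blob; // no range requested, serve the whole blob
        }
        Matcher matcher = Pattern.compile("bytes=(\\d+)-(\\d+)").matcher(rangeHeader);
        if (matcher.matches() == false) {
            return blob; // range syntax not handled by this sketch
        }
        int start = Integer.parseInt(matcher.group(1));
        if (start >= blob.length) {
            return new byte[0];
        }
        int end = Math.min(Integer.parseInt(matcher.group(2)), blob.length - 1);
        // HTTP ranges are inclusive on both ends
        return Arrays.copyOfRange(blob, start, end + 1);
    }
}
```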
Armin Braun 3be70f64d8
Fix GCS Mock Http Handler JDK Bug (#51933) (#51941)
There is an open JDK bug that causes an assertion in the JDK's
http server to trip if we don't drain the request body before sending response headers.
See https://bugs.openjdk.java.net/browse/JDK-8180754
We work around this issue here by always draining the request at the beginning of the handler.

Fixes #51446
2020-02-05 15:37:06 +01:00
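A minimal sketch of the draining workaround described in the commit above, assuming a `com.sun.net.httpserver` based handler; the helper name is illustrative, not the actual mock code.

```java
import com.sun.net.httpserver.HttpExchange;
import java.io.IOException;
import java.io.InputStream;

final class DrainRequestExample {
    // Read the request body to EOF before any response headers are sent, so the
    // JDK's http server does not trip its internal assertion (JDK-8180754).
    static void drainRequestBody(HttpExchange exchange) throws IOException {
        InputStream body = exchange.getRequestBody();
        byte[] buffer = new byte[8192];
        while (body.read(buffer) >= 0) {
            // discard the bytes; we only need the stream to reach EOF
        }
    }
}
```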
Armin Braun 7914c1a734
Optimize GCS Mock (#51593) (#51594)
This test was still very GC-heavy, in Java 8 runs in particular,
which seems to slow down request processing to the point of timeouts
in some runs.
This PR completely removes the large number of O(MB) `byte[]` allocations
that were happening in the mock http handler, which cuts the allocation rate
by about a factor of 5 in my local testing for the GC-heavy `testSnapshotWithLargeSegmentFiles`
run.

Closes #51446
Closes #50754
2020-01-29 11:06:05 +01:00
Armin Braun a725896c92
Fix and Reenable SnapshotTool Minio Tests (#50736) (#50745)
This solves half of the problem in #46813 by moving the S3
tests to the shared MinIO fixture so that we at least have
some non-3rd-party, constantly running coverage of these tests.
2020-01-08 16:33:36 +01:00
Armin Braun d0d48311f4
Faster and Simpler GCS REST Mock (#50706) (#50707)
* Faster and Simpler GCS REST Mock

I reworked the GCS mock a little to use less copying and allocation,
to log the full request body on failure to read a multi-part request,
and to generally be a little simpler and easier to follow, in order to track down
the remaining issues in this class's multi-part request parsing that cause
almost daily failures and can't be reproduced locally.
2020-01-07 20:17:46 +01:00
Armin Braun 72a405fafb
Fix GCS Mock Broken Handling of some Blobs (#50666) (#50671)
* Fix GCS Mock Broken Handling of some Blobs

We were incorrectly handling blobs starting with `\r\n`, which randomly broke
tests whenever a blob happened to start with those bytes.

Relates #49429
2020-01-06 19:27:57 +01:00
Armin Braun c73930988b
Remove Unused Delete Endpoint from GCS Mock (#50128) (#50134)
Follow-up to #50024: we're not using the single-delete endpoint any more, so there is no need to keep a mock endpoint for it.
2019-12-12 14:18:06 +01:00
Armin Braun 0fae4065ef
Better Logging GCS Blobstore Mock (#50102) (#50124)
* Better Logging GCS Blobstore Mock

Two things:
1. We should just throw a descriptive assertion error and figure out why we're not reading a multi-part request, instead of
returning a `400` and failing the tests that way, since we can't reproduce these 400s locally.
2. We were missing logging of the exception on a cleanup delete failure that coincides with the `400` issue in tests.

Relates #49429
2019-12-12 11:17:22 +01:00
Armin Braun d19c8db4e4
Fix GCS Mock Batch Delete Behavior (#50034) (#50084)
Batch deletes get a response for every delete request, not just those that actually hit an existing blob.
The fact that we only responded for existing blobs leads to a degenerate response that throws a parse exception if a batch delete only contains non-existent blobs.
2019-12-11 17:40:25 +01:00
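A sketch of the fixed behavior described above; the status codes and the map-backed store are assumptions for illustration, not the mock's exact wire format. Every name in a batch delete gets its own response part, including blobs that don't exist.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class BatchDeleteExample {
    private final Map<String, byte[]> blobs = new ConcurrentHashMap<>();

    // One status line per requested blob, whether or not it existed, so the
    // client-side batch parser always finds a response for every sub-request.
    List<String> deleteAll(List<String> blobNames) {
        List<String> statuses = new ArrayList<>();
        for (String name : blobNames) {
            statuses.add(blobs.remove(name) != null ? "HTTP/1.1 204 No Content" : "HTTP/1.1 404 Not Found");
        }
        return statuses;
    }
}
```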
Armin Braun 90e9d61f2b
Optimize GoogleCloudStorageHttpHandler (#49677) (#49707)
This removes a lot of needless buffering and array creation
to reduce the significant memory usage of tests using this handler.
The incoming stream from the `exchange` is already buffered,
so there is no point in adding a ton of additional buffers
everywhere.
2019-11-29 11:17:47 +01:00
Armin Braun 495b543e63
Improve Stability of GCS Mock API (#49592) (#49597)
Same as #49518, pretty much, but for GCS.
This fixes a few more spots where the input stream can get closed
without being fully drained and adds assertions to make sure
it's always drained.
The no-close stream wrapper was moved to production code utilities since
there are a number of spots in production code where it's also useful
(it will be reused there in a follow-up).
2019-11-26 16:53:51 +01:00
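The no-close wrapper mentioned in the commit above can be sketched as follows; the class name is illustrative, not the actual production utility.

```java
import java.io.FilterInputStream;
import java.io.InputStream;

final class NoCloseInputStream extends FilterInputStream {
    NoCloseInputStream(InputStream delegate) {
        super(delegate);
    }

    @Override
    public void close() {
        // deliberately a no-op: callers may hand this stream to code that closes its
        // input, while the handler still needs to drain the underlying stream fully
    }
}
```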
Armin Braun a5fa86ed97
Improve Stability of Mock APIs (#49518) (#49524)
This commit ensures that even for requests that are known to have an empty body
we at least attempt to read one byte from the request body input stream.
This is done to work around the behavior in `sun.net.httpserver.ServerImpl.Dispatcher#handleEvent`
that will close a TCP/HTTP connection that does not have the `eof` flag (see `sun.net.httpserver.LeftOverInputStream#isEOF`)
set on its input stream. As far as I can tell, the only way to set this flag is to do a read when there are no more bytes buffered.
This fixes the numerous connection-closing issues because the `ServerImpl` stops closing connections that it thinks
weren't fully drained.

Also, I removed a now-redundant drain loop in the Azure handler, as well as the connection closing in the error handler's
drain action (this shouldn't have an effect but makes things more predictable/easier to reason about IMO).

I would suggest merging this and closing related issue after verifying that this fixes things on CI.

The way to locally reproduce the issues we're seeing in tests is to make the retry timings more aggressive in e.g. the Azure tests
and move them to single-digit values. This makes the retries happen quickly enough that they run into the async connection closing
of allegedly non-eof connections by `ServerImpl` and produces the exact kinds of failures we're seeing currently.

Relates #49401, #49429
2019-11-25 10:28:55 +01:00
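A minimal sketch of the idea in the commit above, assuming a `com.sun.net.httpserver` handler (the helper is hypothetical): attempt a single read even on bodies expected to be empty, so `LeftOverInputStream` records EOF and `ServerImpl` does not asynchronously close the connection.

```java
import com.sun.net.httpserver.HttpExchange;
import java.io.IOException;
import java.io.InputStream;

final class TouchRequestBodyExample {
    static void touchRequestBody(HttpExchange exchange) throws IOException {
        InputStream body = exchange.getRequestBody();
        if (body.read() >= 0) {
            // unexpectedly non-empty: keep draining so the eof flag is still set
            byte[] buffer = new byte[8192];
            while (body.read(buffer) >= 0) {
                // discard
            }
        }
    }
}
```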
Armin Braun 231d079bf8
Fix Azure Mock Issues (#49377) (#49381)
Fixing a few small issues found in this code:
1. We were reading the response headers instead of the request headers when checking for blob existence in the mocked single upload path.
2. The error code can never be `null`, so the dead code that resulted from that assumption was removed.
3. In the logging wrapper we weren't checking for `Throwable`, so any failing assertions in the http mock would not show up since they
run on a thread managed by the mock http server.
2019-11-21 19:57:50 +01:00
Tanguy Leroux f753fa2265 HttpHandlers should return correct list of objects (#49283)
This commit fixes the server-side logic of "List Objects" operations
of the Azure and S3 fixtures. Until today, the fixtures were returning a
"flat" view of stored objects and were not correctly handling the
delimiter parameter. This caused some object listings to be wrongly
interpreted by the snapshot deletion logic in Elasticsearch, which
relies on the ability to list child containers of BlobContainer (#42653)
to correctly delete stale indices.

As a consequence, the blobs were not correctly deleted from the
emulated storage service and stayed in heap until they got garbage
collected, causing CI failures like #48978.

This commit fixes the server-side logic of the Azure and S3 fixtures when
listing objects so that they now return the correct common blob prefixes as
expected by the snapshot deletion process. It also adds an after-test
check to ensure that tests leave the repository empty (besides the
root index files).

Closes #48978
2019-11-20 09:26:42 +01:00
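The delimiter handling described above boils down to deriving common prefixes from a flat list of blob names. A rough sketch of that logic (not the fixtures' actual code):

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

final class CommonPrefixesExample {
    // Blob names under `prefix` that contain `delimiter` collapse into a single
    // common prefix entry, mimicking a child "directory" in the listing.
    static Set<String> commonPrefixes(List<String> blobNames, String prefix, String delimiter) {
        Set<String> prefixes = new TreeSet<>();
        for (String name : blobNames) {
            if (name.startsWith(prefix) == false) {
                continue;
            }
            int index = name.indexOf(delimiter, prefix.length());
            if (index >= 0) {
                prefixes.add(name.substring(0, index + delimiter.length()));
            }
        }
        return prefixes;
    }
}
```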
Tanguy Leroux ca4f55f2e4
Add docker-compose fixtures for S3 integration tests (#49107) (#49229)
Similarly to what has been done for Azure (#48636) and GCS (#48762),
this commit removes the existing Ant fixture that emulates an S3 storage
service in favor of multiple docker-compose based fixtures.

The goals here are multiple: be able to reuse an s3-fixture outside of the
repository-s3 plugin; allow parallel execution of integration tests; and
replace the existing AmazonS3Fixture, which has evolved into a weird beast,
with dedicated, more maintainable fixtures.

The server-side logic that emulates S3 mostly comes from the latest
HttpHandler made for S3 blob store repository tests, with additional
features extracted from the (now removed) AmazonS3Fixture:
authentication checks, session token checks and improved response
errors. Chunked upload request support for S3 objects has been added
too.

The server-side logic of all tests now resides in a single S3HttpHandler class.

Whereas AmazonS3Fixture contained logic for basic tests, session token
tests, EC2 tests or ECS tests, the S3 fixtures are now dedicated to each
kind of test. Fixtures inherit from each other, making things easier
to maintain.
2019-11-18 05:56:59 -05:00
Rory Hunter c46a0e8708
Apply 2-space indent to all gradle scripts (#49071)
Backport of #48849. Update `.editorconfig` to make the Java settings the
default for all files, and then apply a 2-space indent to all `*.gradle`
files. Then reformat all the files.
2019-11-14 11:01:23 +00:00
Tanguy Leroux 20fc1dbe18
Move MinIO fixture in its own project (#49036)
This commit moves the MinIO docker-compose fixture from the
:plugins:repository-s3 to its own :test:minio-fixture Gradle project.
2019-11-13 10:03:59 -05:00
Tanguy Leroux 8a14ea5567
Add docker-composed based test fixture for GCS (#48902)
Similarly to what has been done for Azure in #48636, this commit
adds a new :test:fixtures:gcs-fixture project which provides two
docker-compose based fixtures that emulate a Google Cloud
Storage service.

Some code has been extracted from existing tests and placed
into this new project so that it can be easily reused in other
projects.
2019-11-07 13:27:22 -05:00
Tanguy Leroux 989467ca1e
Add docker-compose based test fixture for Azure (#48736)
This commit adds a new :test:fixtures:azure-fixture project which
provides a docker-compose based container that runs an AzureHttpFixture
Java class that emulates an Azure Storage service.

The logic to emulate the service is extracted from existing tests and
placed into AzureHttpHandler in the new project so that it can be
easily reused. The :plugins:repository-azure project is an example
of such reuse.

The AzureHttpFixture fixture is just a wrapper around AzureHttpHandler
and is now executed within the docker container.

The :plugins:repository-azure:qa:microsoft-azure project uses the new
test fixture and the existing AzureStorageFixture has been removed.
2019-10-31 10:43:43 +01:00
Yogesh Gaikwad 91c342a888
fix and enable repository-hdfs secure tests (#44044) (#44199)
Due to recent changes made to convert `repository-hdfs` to test
clusters (#41252), the `integTestSecure*` tasks did not depend on
`secureHdfsFixture`, so running them would fail as the fixture
would not be available. This commit adds the dependency on the fixture
to the task.

The `secureHdfsFixture` is an `AntFixture` which is spawned as a process.
Internally it waits 30 seconds for the resources to be made available.
On my local machine it took almost 45 seconds to become available, so I have
added the wait time as an input to `AntFixture` that defaults to 30 seconds
and set it to 60 seconds for the secure HDFS fixture.

The integ test for secure HDFS had been disabled for a long time, so
the changes done in #42090 to fix the tests are also included in this commit.
2019-07-12 12:44:01 +10:00
Alpar Torok 0c8294e633 Make sure the clean task doesn't break test fixtures (#43641)
Use a dedicated fixture dir.
2019-07-08 17:58:27 +03:00
Alpar Torok 7cc6dca697 Remove explicitly enabled build fixture task 2019-06-14 10:42:08 +03:00
Yogesh Gaikwad 4ae1e30a98
Enable krb5kdc-fixture, kerberos tests mount urandom for kdc container (#41710) (#43178)
Infra has fixed #10462 by installing `haveged` on CI workers.
This commit enables the disabled fixture and tests, and mounts
`/dev/urandom` for the container so there is enough
entropy available for the KDC.
Note: hdfs-repository tests have been disabled; a separate issue will be raised for them.

Closes #40624 Closes #40678
2019-06-13 13:02:16 +10:00
Mark Vieira 1287c7d91f
[Backport] Replace usages RandomizedTestingTask with built-in Gradle Test (#40978) (#40993)
* Replace usages RandomizedTestingTask with built-in Gradle Test (#40978)

This commit replaces the existing RandomizedTestingTask and supporting code with Gradle's built-in JUnit support via the Test task type. Additionally, the previous workaround to disable all tasks named "test" and create new unit testing tasks named "unitTest" has been removed such that the "test" task now runs unit tests as per the normal Gradle Java plugin conventions.

(cherry picked from commit 323f312bbc829a63056a79ebe45adced5099f6e6)

* Fix forking JVM runner

* Don't bump shadow plugin version
2019-04-09 11:52:50 -07:00
Alpar Torok 293297ae3d Fix repository-hdfs when no docker and unnecessary fixture
The hdfs-fixture is actually executed in plugin/repository-hdfs as a
dependency. The fixture is not needed and actually causes a failure
because we have two copies now and both use the same ports.
2019-03-29 16:55:12 +02:00
Alpar Torok 2b91fb1cc0 Avoid building hdfs-fixture, use an image that works instead
Avoid the additional requirement for the debian package repos to be up,
and depend on dockerhub only instead.
2019-03-29 16:55:11 +02:00
Alpar Torok e8c0b53796 Add ability to mute and mute flaky fixture (#40630) 2019-03-29 12:10:04 +02:00
Alpar Torok d791e08932 Test fixtures krb5 (#40297)
Replaces the vagrant based kerberos fixtures with the docker based test fixtures plugin.
The configuration is now entirely static on the docker side and no longer driven by Gradle.
Two different services are also configured, since there are two different consumers of the fixture that can run in parallel and require different configurations.
2019-03-28 17:26:58 +02:00
Alpar Torok e9ef5bdce8
Converting randomized testing to create a separate unitTest task instead of replacing the builtin test task (#36311)
- Create a separate unitTest task instead of using Gradle's built-in one
- Convert all configuration to use the new task
- The built-in task is now disabled
2018-12-19 08:25:20 +02:00
Tim Brooks aaf466ff5e
Revert transport.port change for tests (#36809)
Commit #36786 updated docs and strings to reference transport.port instead of
transport.tcp.port. However, this breaks backwards compatibility tests,
as the tests rely on string configurations and transport.port does not
exist prior to 6.6. This commit reverts the places where we reference
transport.tcp.port for tests. This work will need to be reintroduced in
a backwards compatible way.
2018-12-18 19:01:13 -07:00
Tim Brooks 47a9a8de49
Update transport docs and settings for changes (#36786)
This is related to #36652. In 7.0 we plan to deprecate a number of
settings that make reference to the concept of a tcp transport. We
mostly just have a single transport type now (based on tcp). Settings
should only reference tcp if they are referring to socket options. This
commit updates the settings in the docs and removes string usages of
the old settings. Additionally, it adds a missing remote compress setting
to the docs.
2018-12-18 13:09:58 -07:00
Alpar Torok c00d0fc814
Test fixtures improvements (#36037)
* Upgrade plugin to latest and expose udp
* Explicit check for windows
* Rename the properties for the port numbers
* Tasks for pre and post container actions
2018-12-12 12:00:47 +02:00
Alpar Torok 8659af68e0
Auto skip license headers on no source (#35640)
* Unmute BuildExamplePluginsIT

* Skip licenseHeaders when there are no sources
2018-11-20 13:02:33 +02:00
Ryan Ernst 0158b59a5a
Test: Fix forbidden uses in test framework (#32824)
This commit fixes existing uses of forbidden apis in the test framework
and re-enables the forbidden apis check. It was previously completely
disabled and had missed a rename of the forbidden apis signatures files.

closes #32772
2018-08-14 11:35:09 -07:00
Yogesh Gaikwad a525c36c60 [Kerberos] Add Kerberos authentication support (#32263)
This commit adds support for Kerberos authentication with a platinum
license. Kerberos authentication support relies on SPNEGO, which is
triggered by challenging clients with a 401 response with the
`WWW-Authenticate: Negotiate` header. A SPNEGO client will then provide
a Kerberos ticket in the `Authorization` header. The tickets are
validated using Java's built-in GSS support. The JVM uses a VM-wide
configuration for Kerberos, so there can be only one Kerberos realm.
This is enforced by a bootstrap check that also enforces the existence
of the keytab file.

In many cases a fallback authentication mechanism is needed when SPNEGO
authentication is not available. In order to support this, the
DefaultAuthenticationFailureHandler now takes a list of failure response
headers. For example, one realm can provide a
`WWW-Authenticate: Negotiate` header as its default and another could
provide `WWW-Authenticate: Basic` to indicate to the client that basic
authentication can be used in place of SPNEGO.

In order to test Kerberos, unit tests are run against an in-memory KDC
that is backed by an in-memory ldap server. A QA project has also been
added to test against an actual KDC, which is provided by the krb5kdc
fixture.

Closes #30243
2018-07-24 08:44:26 -06:00
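The challenge/fallback mechanism described above can be sketched as a 401 response that advertises both headers. This is illustrative only, using a plain JDK http handler rather than the DefaultAuthenticationFailureHandler API itself.

```java
import com.sun.net.httpserver.HttpExchange;
import java.io.IOException;

final class SpnegoChallengeExample {
    static void sendChallenge(HttpExchange exchange) throws IOException {
        // SPNEGO challenge plus a Basic fallback for clients that cannot negotiate
        exchange.getResponseHeaders().add("WWW-Authenticate", "Negotiate");
        exchange.getResponseHeaders().add("WWW-Authenticate", "Basic realm=\"security\"");
        exchange.sendResponseHeaders(401, -1); // -1: no response body
        exchange.close();
    }
}
```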
Tanguy Leroux bbfe1eccc7
[Tests] Mutualize fixtures code in BaseHttpFixture (#31210)
Many fixtures have similar code for writing the pid & ports files or
for handling HTTP requests. This commit adds an AbstractHttpFixture 
class in the test framework that can be extended for specific testing purposes.
2018-06-14 14:09:56 +02:00
Tanguy Leroux 8b4d80ad09
Fix AntFixture waiting condition (#31272)
The AntFixture waiting condition was evaluated to false
when it should have been true.
2018-06-13 12:40:22 +02:00
Tanguy Leroux bf58660482
Remove all unused imports and fix CRLF (#31207)
The X-Pack opening and other recent refactorings left a lot of
unused imports in the codebase. This commit removes them all.
2018-06-11 15:12:12 +02:00
Jason Tedor 65c107b47d
Fix unknown licenses (#31223)
The goal of this commit is to address unknown licenses when producing
the dependencies info report. We have two different checks that we run
on licenses. The first check is whether or not we have stashed a copy of
the license text for a dependency in the repository. The second is to
map every dependency to a license type (e.g., BSD 3-clause). The problem
here is that the way we were handling licenses in the second check
differs from how we handle licenses in the first check. The first check
works by finding a license file with the name of the artifact followed
by the text -LICENSE.txt. Yet in some cases we allow mapping an artifact
name to another name used to check for the license (e.g., we map
lucene-.* to lucene, and opensaml-.* to shibboleth). The second check
understood the first way of looking for a license file but not the
second way. So in this commit we teach the second check about the
mappings from artifact names to license names. We do this by copying the
configuration from the dependencyLicenses task to the dependenciesInfo
task and then reusing the code from the first check in the second
check. There were some other challenges here though. For example,
dependenciesInfo was checking too many dependencies. For now, we should
only be checking direct dependencies and leaving transitive dependencies
from another org.elasticsearch artifact to that artifact (we want to do
this differently in a follow-up). We also want to disable
dependenciesInfo for projects that we do not publish, users only care
about licenses they might be exposed to if they use our assembled
products. With all of the changes in this commit we have eliminated all
unknown licenses. A follow-up will enforce that when we add a new
dependency it does not get mapped to unknown, these will be forbidden in
the future. Therefore, with this change and the earlier changes we are left
having no unknown licenses and two custom licenses; custom here means the
license does not map to an SPDX license type. Those two licenses are xz and
ldapsdk. A future change will not allow additional custom licenses
unless they are explicitly whitelisted. This ensures that if a new
dependency is added it is mapped to an SPDX license or mapped to custom
because it does not have an SPDX license.
2018-06-09 07:28:41 -04:00
James Baiera e16f1271b6
Fix SecurityException when HDFS Repository used against HA Namenodes (#27196)
* Sense HA HDFS settings and remove permission restrictions during regular execution.

This PR adds integration tests for HA-Enabled HDFS deployments, both regular and secured. 
The Mini HDFS fixture has been updated to optionally run in HA-Mode. A new test suite has 
been added for reproducing the effects of a Namenode failing over during regular repository 
usage. Going forward, the HDFS Repository will still be subject to its self imposed permission 
restrictions during normal use, but will no longer restrict them when running against an HA 
enabled HDFS cluster. Instead, the plugin will rely on the provided security policy and not 
further restrict the permissions so that the transparent operation to failover to a different 
Namenode in the client does not raise security exceptions. Additionally, we are now testing the 
secure mode with SASL based wire encryption of data between Elasticsearch and HDFS. This 
includes a missing library (commons codec) in order to support this change.
2017-12-01 14:26:05 -05:00
James Baiera c760eec054 Add permission checks before reading from HDFS stream (#26716)
Add checks for special permissions before reading HDFS stream data. This also adds a test from the
readonly repository fix. MiniHDFS will now start with an existing repository containing a single snapshot.
A readonly repository is created in tests and attempts to list the snapshots
within this repo.
2017-09-21 11:55:07 -04:00
James Baiera 74f4a14d82 Upgrading HDFS Repository Plugin to use HDFS 2.8.1 Client (#25497)
Hadoop 2.7.x libraries fail when running on JDK9 due to the version string changing to a single 
character. On Hadoop 2.8, this is no longer a problem, and it is unclear whether the fix will be
backported to the 2.7 branch. This commit upgrades our Hadoop dependency for the HDFS
Repository to 2.8.1.
2017-06-30 17:57:56 -04:00
Nik Everett 21b1db2965 Remove assemble from build task when assemble removed
Removes the `assemble` task from the `build` task when we have
removed `assemble` from the project. We removed `assemble` from
projects that aren't published so our releases will be faster. But
that broke CI because CI builds with `gradle precommit build` and,
it turns out, `build` includes `check` and `assemble`. With
this change CI will only run `check` for projects without an
`assemble` task.
2017-06-16 17:19:14 -04:00
Nik Everett 7b358190d6 Remove assemble task when not used for publishing (#25228)
Removes the `assemble` task from projects that are not published.
This should speed up `gradle assemble` by skipping projects that
don't need to be built. Which is useful because `gradle assemble`
is how we cut releases.
2017-06-16 11:46:34 -04:00
Nik Everett 8188569fd1 Add qa module that tests reindex-from-remote against pre-5.0 versions of Elasticsearch (#24561)
Adds tests for reindex-from-remote for the latest 2.4, 1.7, and
0.90 releases. 2.4 and 1.7 are fairly popular versions but 0.90
is a point of pride.

This fixes any issues those tests revealed.

Closes #23828
Closes #24520
2017-05-11 10:06:20 -04:00
James Baiera 6a113ae499 Introduce Kerberos Test Fixture for Repository HDFS Security Tests (#24493)
This PR introduces a subproject in test/fixtures that contains a Vagrantfile used for standing up a 
KRB5 KDC (Kerberos). The PR also includes helper scripts for provisioning principals, a few 
changes to the HDFS Fixture to allow it to interface with the KDC, as well as a new suite of 
integration tests for the HDFS Repository plugin.

The HDFS Repository plugin senses if the local environment can support the HDFS Fixture 
(Windows is generally a restricted environment). If it can use the regular fixture, it then tests if 
Vagrant is installed with a compatible version to determine if the secure test fixtures should be 
enabled. If the secure tests are enabled, then we create a Kerberos KDC fixture, tasks for adding 
the required principals, and an HDFS fixture configured for security. A new integration test task is 
also configured to use the KDC and secure HDFS fixture and to run a testing suite that uses 
authentication. At the end of the secure integration test the fixtures are torn down.
2017-05-10 17:42:20 -04:00
Robert Muir f67390e0c8 in the plugin: guard against HADOOP_HOME in environment on any platform.
hdfs fixture: minihdfs works on windows now, if things are properly set
but our test fixture still cannot launch this on windows.
2015-12-21 02:21:53 -05:00
Robert Muir e93c491dbe simplify hdfs fixture 2015-12-20 23:50:27 -05:00
Robert Muir 99f2cde225 Fail fast if HDFS cluster shuts itself down 2015-12-20 22:30:41 -05:00
Robert Muir f4f8b6e3fe Merge branch 'master' of github.com:elastic/elasticsearch into hdfs2-only 2015-12-20 21:59:02 -05:00