Commit Graph

2554 Commits

Author SHA1 Message Date
Armin Braun 60b6d4eddc
Increase Timeout in S3 Cooldown Test (#56267) (#56323)
Moving from `5s` to `10s` here because of #56095.
This adds `10s` to the overall runtime of the test which should be
a reasonable tradeoff for stability.

Closes #56095
2020-05-07 11:23:07 +02:00
Jason Tedor 33669c0420
Upgrade to Jackson 2.10.4 (#56188)
Another Jackson release is available. There are some CVEs addressed,
none of which impact us, but since we can now bump Jackson easily, let
us move along with the train to avoid the false positives from security
scanners.
2020-05-06 17:20:23 -04:00
Julie Tibshirani e852bb29b7
Simplify signature of FieldMapper#parseCreateField. (#56144)
`FieldMapper#parseCreateField` accepts the parse context, plus a list of fields
as an output parameter. These fields are immediately added to the document
through `ParseContext#doc()`.

This commit simplifies the signature by removing the list of fields, and having
the mappers add the fields directly to `ParseContext#doc()`. I think this is
nicer for implementors, because previously fields could be added either through
the list or the context (through `add`, `addWithKey`, etc.).
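
A minimal sketch of the signature change being described (parameter types are simplified here, not the exact production code):

    // Before: fields were handed back through an output parameter.
    protected abstract void parseCreateField(ParseContext context, List<IndexableField> fields) throws IOException;

    // After: implementations add fields directly to the document.
    protected abstract void parseCreateField(ParseContext context) throws IOException;

    // Inside an implementation, instead of fields.add(field) one now writes:
    context.doc().add(field);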
2020-05-06 11:12:09 -07:00
Tim Brooks 6a51017cb2
Upgrade netty to 4.1.49.Final (#56059) 2020-05-05 10:40:23 -06:00
Armin Braun 3a64ecb6bf
Allow Deleting Multiple Snapshots at Once (#55474) (#56083)
* Allow Deleting Multiple Snapshots at Once (#55474)

Adds deleting multiple snapshots in one go without significantly changing the mechanics of snapshot deletes otherwise.
This change does not yet allow mixing snapshot delete and abort. Abort is still only allowed for a single snapshot delete by exact name.
2020-05-03 20:30:58 +02:00
Tim Brooks 80662f31a1
Introduce mechanism to stub request handling (#55832)
Currently there is a clear mechanism to stub sending a request through
the transport. However, this is limited to testing exceptions on the
sender side. This commit reworks our transport related testing
infrastructure to allow stubbing request handling on the receiving side.
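
Purely as an illustration of the idea (the names below are hypothetical, not the actual test API introduced here):

    // Stub how the receiving side handles a given action, e.g. to simulate a failure:
    receiverTransportService.addRequestHandlingBehavior(SomeAction.NAME,
        (handler, request, channel, task) -> channel.sendResponse(new ElasticsearchException("simulated failure")));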
2020-04-27 16:57:15 -06:00
Rory Hunter d66af46724
Always use deprecateAndMaybeLog for deprecation warnings (#55319)
Backport of #55115.

Replace calls to deprecate(String,Object...) with deprecateAndMaybeLog(...),
with an appropriate key, so that all messages can potentially be deduplicated.
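
Roughly, the change at each call site looks like this (the key and message below are made up for illustration; the method names are taken from the description above):

    // Before: no key, so repeated warnings cannot be collapsed
    deprecationLogger.deprecate("the [{}] setting is deprecated", settingName);

    // After: a stable key lets the logger deduplicate repeated warnings
    deprecationLogger.deprecateAndMaybeLog("example_setting_deprecation",
        "the [{}] setting is deprecated", settingName);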
2020-04-23 09:20:54 +01:00
Armin Braun db7eb8e8ff
Remove Redundant CS Update on Snapshot Finalization (#55276) (#55528)
This change folds the removal of the in-progress snapshot entry
into setting the safe repository generation. Outside of removing
an unnecessary cluster state update, this also has the advantage
of removing a somewhat inconsistent cluster state where the safe
repository generation points at `RepositoryData` that contains a
finished snapshot while it is still in-progress in the cluster
state, making it easier to reason about the state machine of
upcoming concurrent snapshot operations.
2020-04-21 15:33:17 +02:00
Yannick Welsch ba39c261e8 Use streaming reads for GCS (#55506)
To read from GCS repositories we're currently using Google SDK's official BlobReadChannel,
which issues a new request every 2MB (default chunk size for BlobReadChannel) using range
requests, and fully downloads the chunk before exposing it to the returned InputStream. This
means that the SDK issues an awfully high number of requests to download large blobs.
Increasing the chunk size is not an option, as that will mean that an awfully high amount of
heap memory will be consumed by the download process.

The Google SDK does not provide the right abstractions for a streaming download. This PR
uses the lower-level primitives of the SDK to implement a streaming download, similar to what
S3's SDK does.
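
For context, this is roughly what the chunked approach being replaced looks like with the SDK (bucket and blob names here are made up):

    Storage storage = StorageOptions.getDefaultInstance().getService();
    ReadChannel reader = storage.reader(BlobId.of("my-bucket", "my-blob"));
    reader.setChunkSize(2 * 1024 * 1024); // each chunk is fetched with its own range request
                                          // and fully buffered before the stream sees it
    InputStream in = Channels.newInputStream(reader);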

Also closes #55505
2020-04-21 13:22:26 +02:00
Ignacio Vera 4783f1894c
mute test testReadRangeBlobWithRetries (#55507) (#55508) 2020-04-21 10:59:35 +02:00
Yannick Welsch b9da307cd1 Add GCS support for searchable snapshots (#55403)
Adds ranged read support for GCS repositories in order to enable searchable snapshot support
for GCS.

As part of this PR, I've extracted some of the test infrastructure to make sure that
GoogleCloudStorageBlobContainerRetriesTests and S3BlobContainerRetriesTests are covering
similar tests (as I saw those diverging in what they cover).
2020-04-20 13:02:59 +02:00
Armin Braun 5550d8f3f6
Fix Path Style Access Setting Priority (#55439) (#55444)
* Fix Path Style Access Setting Priority

Fixing an obvious bug in handling path-style access when it's the only setting overridden by the
repository settings.

Closes #55407
2020-04-20 11:47:41 +02:00
Jason Tedor 0a1b566c65
Fix security manager bug writing large blobs to GCS (#55421)
* Fix security manager bug writing large blobs to GCS

This commit addresses a security manager permissions issue writing large
blobs (on the resumable upload path) to GCS. The underlying issue here
is that we need to wrap the close and write calls on the channel. It is
not enough to do this:

SocketAccess.doPrivilegedVoidIOException(
  () -> Streams.copy(
    inputStream,
    Channels.newOutputStream(client().writer(blobInfo, writeOptions))));

The reason that this is not enough is that Streams#copy will be in
the stacktrace and it is not granted the security manager permissions
needed to close or write this channel. We only grant those permissions
to classes loaded in the plugin classloader, and Streams#copy is from
the parent classloader. This is why we must wrap the close and write
calls as privileged, to truncate the Streams#copy call out of the
stacktrace.

The reason that this issue is not caught in testing is because the size
of data that we use in testing is too small to trigger the large blob
resumable upload path. Therefore, we address this by adding a system
property to control the threshold, which we can then set in tests to
exercise this code path. Prior to rewriting the writeBlobResumable
method to wrap the close and write calls as privileged, with this
additional test, we are able to reproduce the security manager
permissions issue. After adding the wrapping, this test now passes.
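
A rough sketch of the wrapping idea (SocketAccess, Streams, client(), blobInfo and writeOptions are as in the snippet above; the wrapper shown here is illustrative, not the exact production code):

    OutputStream rawOut = Channels.newOutputStream(client().writer(blobInfo, writeOptions));
    OutputStream privilegedOut = new FilterOutputStream(rawOut) {
        @Override
        public void write(byte[] b, int off, int len) throws IOException {
            // doPrivileged cuts the stack walk off here, so Streams#copy further up is not checked
            SocketAccess.doPrivilegedVoidIOException(() -> out.write(b, off, len));
        }

        @Override
        public void close() throws IOException {
            SocketAccess.doPrivilegedVoidIOException(out::close);
        }
    };
    Streams.copy(inputStream, privilegedOut);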

* Fix forbidden APIs issue

* Remove leftover debugging
2020-04-17 18:49:10 -04:00
William Brafford 49e30b15a2
Deprecate disabling basic-license features (#54816) (#55405)
We believe there's no longer a need to be able to disable basic-license
features completely using the "xpack.*.enabled" settings. If users don't
want to use those features, they simply don't need to use them. Having
such features always available lets us build more complex features that
assume basic-license features are present.

This commit deprecates settings of the form "xpack.*.enabled" for
basic-license features, excluding "security", which is a special case.
It also removes deprecated settings from integration tests and unit
tests where they're not directly relevant; e.g. monitoring and ILM are
no longer disabled in many integration tests.
2020-04-17 15:04:17 -04:00
Armin Braun 73ab3719e8
Mute GCS Retry Tests on JDK8 (#55372)
Same as #53119 but for the retries tests.
Closes #55317
2020-04-17 12:19:35 +02:00
William Brafford 2ba3be9db6
Remove deprecated third-party methods from tests (#55255) (#55269)
I've noticed that a lot of our tests are using deprecated static methods
from the Hamcrest matchers. While this is not a big deal in any
objective sense, it seems like a small good thing to reduce compilation
warnings and be ready for a new release of the matcher library if we
need to upgrade. I've also switched a few other methods in tests that
have drop-in replacements.
2020-04-15 17:54:47 -04:00
Ryan Ernst 29b70733ae
Use task avoidance with forbidden apis (#55034)
Currently forbidden apis accounts for 800+ tasks in the build. These
tasks are aggressively created by the plugin. In forbidden apis 3.0, we
will get task avoidance
(https://github.com/policeman-tools/forbidden-apis/pull/162), but we
need to use the same task avoidance mechanisms ourselves so as not to trigger
these task creations. This commit does that for our forbidden apis
usages, in preparation for upgrading to 3.0 when it is released.
2020-04-15 13:27:53 -07:00
Ignacio Vera a677b63daa
Upgrade to lucene 8.5.1 release (#55229) (#55235)
Upgrade to the Lucene 8.5.1 release, which contains a fix for a bug that might introduce index corruption when deleting data from an index that was previously shrunk.
2020-04-15 17:35:42 +02:00
Armin Braun 2f91e2aab7
Fix Race in Snapshot Abort (#54873) (#55233)
We can be a little more efficient when aborting a snapshot. Since we know the new repository
data after finalizing the aborted snapshot, we can pass it down to the snapshot completion listeners.
This way, we don't have to fork off to the snapshot threadpool to get the repository data when the listener completes and can directly submit the delete task with high priority straight from the cluster state thread.
2020-04-15 15:42:15 +02:00
Mark Vieira ce85063653
[7.x] Re-add origin url information to publish POM files (#55173) 2020-04-14 13:24:15 -07:00
Yannick Welsch a610513ec7 Provide repository-level stats for searchable snapshots (#55051)
Provides basic repository-level stats that will allow us to get some insight into how many
requests are actually being made by the underlying SDK. Currently only tracks GET and LIST
calls for S3 repositories. Most of the code is unfortunately boilerplate to add a new endpoint
that will help us better understand some of the low-level dynamics of searchable snapshots.
2020-04-14 14:34:08 +02:00
Jake Landis a2fafa6af4
[7.x] Lazy test cluster module and plugins (#54852) (#55087)
This change converts the module and plugin parameters
for testClusters to be lazy, meaning that the values
are not resolved until they are actually used. This
removes the requirement to use project.afterEvaluate to
be able to resolve the bundle artifact.

Note - this does not completely remove the need for afterEvaluate
since it is still needed for the custom resource extension.
2020-04-13 10:53:35 -05:00
Jason Tedor 9eeae59a83
Clarify available processors (#54907)
The use of available processors, the terminology, and the settings
around it have evolved over time. This commit cleans up some places in
the code and in the docs to adjust to the current terminology.
2020-04-10 08:48:27 -04:00
Armin Braun f6bdd30165
Fix S3 Blob Container Retries Test Range Handling (#55000) (#55002)
Ranges in HTTP headers use inclusive values for the start and end of the range.
The math we used was off insofar as a range whose start equals its end resulted in length `0`
instead of the correct value of `1`.
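
In other words, the fix boils down to this bit of arithmetic (illustrative helper, not the actual test code):

    // For "Range: bytes=start-end" both bounds are inclusive,
    // so start == end means one byte, not zero.
    static long rangeLength(long start, long end) {
        return end - start + 1;
    }
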
Closes #54981
Closes #54995
2020-04-09 10:58:42 +02:00
Mark Vieira ac6d1f7b24
Mute S3BlobContainerRetriesTests.testReadRangeBlobWithRetries 2020-04-08 16:45:38 -07:00
Mark Vieira 264bfaca56
Mute S3BlobContainerRetriesTests.testReadBlobWithPrematureConnectionClose 2020-04-08 13:05:35 -07:00
Armin Braun 411dc2f607
Fix Broken Math in S3 Retries Tests (#54952) (#54972)
If we run into `length == 0`, we trip an assertion in `randomIntBetween(0, length - 1)`.
2020-04-08 20:32:21 +02:00
Ryan Ernst 37795d259a
Remove guava from transitive compile classpath (#54309) (#54695)
Guava was removed from Elasticsearch many years ago, but remnants of it
remain due to transitive dependencies. When a dependency pulls guava
into the compile classpath, devs can inadvertently begin using methods
from guava without realizing it. This commit moves guava to a runtime
dependency in the modules where it is needed.

Note that one special case is the html sanitizer in watcher. The third
party dep uses guava in the PolicyFactory class signature. However, only
calling a method on the PolicyFactory actually causes the class to be
loaded; a reference alone does not require the compiler to look at the
class implementation. There we utilize a MethodHandle for invoking the
relevant method at runtime, where guava will continue to exist.
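
The pattern looks roughly like this (the sanitize(String) method is used purely as an example; the method the watcher code actually invokes may differ):

    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.MethodType;

    static String sanitize(Object policyFactory, String untrustedHtml) throws Throwable {
        // The class is only referenced by name, so its guava-typed members never
        // need to be resolved on the compile classpath.
        Class<?> policyFactoryClass = Class.forName("org.owasp.html.PolicyFactory");
        MethodHandle handle = MethodHandles.publicLookup().findVirtual(
            policyFactoryClass, "sanitize", MethodType.methodType(String.class, String.class));
        return (String) handle.invoke(policyFactory, untrustedHtml);
    }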
2020-04-07 23:20:17 -07:00
Tim Brooks 619028c33e
Implement transport circuit breaking in aggregator (#54927)
This commit moves the action name validation and circuit breaking into
the InboundAggregator. This work is valuable because it lays the
groundwork for incrementally circuit breaking as data is received.

This PR includes the following behavioral change:

Handshakes now contribute to circuit breaking, but cannot be broken. Previously
they neither contributed to circuit breaking nor could they be broken.
2020-04-07 17:10:31 -06:00
Tim Brooks 9cf2406cf1
Move network stats marking into InboundPipeline (#54908)
This is a follow-up to #48263. It moves the inbound stats tracking
inside of the InboundPipeline.
2020-04-07 13:34:05 -06:00
Tanguy Leroux 4d36917e52
Merge feature/searchable-snapshots branch into 7.x (#54803) (#54825)
This is a backport of #54803 for 7.x.

This pull request cherry picks the squashed commit from #54803 with the additional commits:

    6f50c92 which adjusts master code to 7.x
    a114549 to mute a failing ILM test (#54818)
    48cbca1 and 50186b2 that cleans up and fixes the previous test
    aae12bb that adds a missing feature flag (#54861)
    6f330e3 that adds missing serialization bits (#54864)
    bf72c02 that adjust the version in YAML tests
    a51955f that adds some plumbing for the transport client used in integration tests

Co-authored-by: David Turner <david.turner@elastic.co>
Co-authored-by: Yannick Welsch <yannick@welsch.lu>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
Co-authored-by: Andrei Dan <andrei.dan@elastic.co>
2020-04-07 13:28:53 +02:00
Jason Tedor f3a0018175
Update link to JDK 14 compiler bug
This commit updates the link to the JDK 14 compiler bug that we have
found. At the time that we committed the workaround, we had a submission
ID, but not yet the public bug URL. This commit adds the public bug URL.
2020-04-07 06:26:14 -04:00
Jason Tedor f2590b9984
Workaround JDK 14 compiler bug (#54689)
This commit works around a bug in the JDK 14 compiler. It chokes on a
method reference, so we substitute a lambda expression instead. The JDK
bug ID is 9064309.
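
The substitution is of this general shape (names below are hypothetical, not the code that actually triggered the bug):

    // Before: a method reference
    //   values.forEach(SomeHelper::process);
    // After: the equivalent lambda expression
    values.forEach(value -> SomeHelper.process(value));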
2020-04-02 19:45:52 -04:00
Jason Tedor 5fcda57b37
Rename MetaData to Metadata in all of the places (#54519)
This is a simple naming change PR, to fix the fact that "metadata" is a
single English word, and for too long we have not followed general
naming conventions for it. We are also not consistent about it, for
example, METADATA instead of META_DATA if we were trying to be
consistent with MetaData (although METADATA is correct when considered
in the context of "metadata"). This was a simple find and replace across
the code base, only taking a few minutes to fix this naming issue
forever.
2020-03-31 17:24:38 -04:00
Zachary Tong c9db2de41d
[7.x] Comprehensively test supported/unsupported field type:agg combinations (#54451)
* Comprehensively test supported/unsupported field type:agg combinations (#52493)

This adds a test to AggregatorTestCase that allows us to programmatically
verify that an aggregator supports or does not support a particular
field type.  It fetches the list of registered field type parsers,
creates a MappedFieldType from the parser and then attempts to run
a basic agg against the field.

A supplied list of supported VSTypes is then compared against the
output (success or exception), and the test succeeds or fails accordingly.

Co-Authored-By: Mark Tozzi <mark.tozzi@gmail.com>
* Skip fields that are not aggregatable

* Use newIndexSearcher() to avoid incompatible readers (#52723)

Lucene's `newSearcher()` can generate readers like ParallelCompositeReader,
which we can't use. We need to instead use our helper `newIndexSearcher`.
2020-03-31 14:35:03 -04:00
Martijn van Groningen 4b4fbc160d
Refactor AliasOrIndex abstraction. (#54394)
Backport of #53982

In order to prepare the `AliasOrIndex` abstraction for the introduction of data streams,
the abstraction needs to be made more flexible, because currently it really can be only
an alias or an index.

* Renamed `AliasOrIndex` to `IndexAbstraction`.
* Introduced an `IndexAbstraction.Type` enum to indicate what an `IndexAbstraction` instance is.
* Replaced the `isAlias()` method that returns a boolean with the `getType()` method that returns the new Type enum.
* Moved `getWriteIndex()` up from the `IndexAbstraction.Alias` to the `IndexAbstraction` interface.
* Moved `getAliasName()` up from the `IndexAbstraction.Alias` to the `IndexAbstraction` interface and renamed it to `getName()`.
* Removed unnecessary casting to `IndexAbstraction.Alias` by just checking the `getType()` method.
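
A small sketch of how a call site looks after the refactoring (the ALIAS constant, the lookup variable and the alias name are assumed for illustration):

    IndexAbstraction abstraction = lookup.get("my-alias");
    if (abstraction.getType() == IndexAbstraction.Type.ALIAS) {
        String name = abstraction.getName();   // was getAliasName() on the Alias subclass
        abstraction.getWriteIndex();           // available on the interface, no cast to Alias required
    }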

Relates to #53100
2020-03-30 10:12:16 +02:00
Tim Brooks 2ccddbfa88
Move transport decoding and aggregation to server (#54360)
Currently all of our transport protocol decoding and aggregation occurs
in the individual transport modules. This means that each implementation
(test, netty, nio) must implement this logic. Additionally, it means
that the entire message has been read from the network before the server
package receives it.

This commit creates a pipeline in server which can be passed arbitrary
bytes to handle. Internally, the pipeline will decode, decompress, and
aggregate the messages. Additionally, this allows us to run many
megabytes of bytes through the pipeline in tests to ensure that the
logic works.

This work will enable future work:

* Circuit breaking or backoff logic based on message type and byte in the content aggregator.
* Sharing bytes with the application layer using the ref counted releasable network bytes.
* Improved network monitoring based specifically on channels.
Finally, this fixes the bug where we do not circuit break on the correct
message size when compression is enabled.
2020-03-27 14:13:10 -06:00
Tim Brooks f5b4020819
Remove netty BytesReference implementations (#54355)
Elasticsearch has a number of different BytesReference implementations.
These implementations can all implement the interface in different ways
with subtly different behavior and performance characteristics. On the
other hand, the JVM only represents bytes as an array or a direct byte
buffer. This commit deletes the specialized Netty implementations and
moves to using a generic ByteBuffer reference type. This will allow us
to focus on standardizing performance and behavior around a smaller number
of implementations that can be used by all components in Elasticsearch.
2020-03-27 11:01:33 -06:00
Armin Braun d9d11f6d16
Remove Unused Apache Http Dependency from GCS Repo Plugin (#54331) (#54342)
We are not using the Apache HTTP client-backed HTTP transport
with the GCS repo. As with the App Engine type transport,
we can save ourselves the dependency on the HTTP client here
and ignore the missing classes.
2020-03-27 15:10:19 +01:00
Armin Braun 70b378cd1b
Upgrade GCS Dependency to 1.106.0 (#54092) (#54112)
* Upgrade GCS Dependency to 1.106.0 (#54092)

Upgrading the GCS dependency and related dependencies, as it seems some more retry bugs were fixed between .104 and .106.
2020-03-25 19:05:01 +01:00
James Baiera b84c74cf70
Update the HDFS version used by HDFS Repo (#53693) (#54125) 2020-03-25 14:01:29 -04:00
Mark Vieira 7728ccd920
Enforce consistent compile options across all projects (#54120)
(cherry picked from commit ddd068a7e92dc140774598664efdc15155ab05c2)
2020-03-25 08:24:21 -07:00
Armin Braun 4271963462
Revert "Use Azure Bulk Deletes in Azure Repository (#53919)" (#54089) (#54111)
This reverts commit 23cccf088810b8416ed278571352393cc2de9523.
Unfortunately SAS token auth still doesn't work with bulk deletes so we can't use them yet.

Closes #54080
2020-03-25 12:13:25 +01:00
Ioannis Kakavas 7d4ae7d982
Upgrade Tika to 1.24 (#54130) (#54150)
Also updates commons-compress to 1.19, pdfbox to 2.0.19 and
POI to 4.1.2. Adds compile dependencies on commons-math3
3.6.1 and SparseBitSet 1.2.
2020-03-25 11:03:26 +02:00
Alan Woodward 39d7d0dc10 Upgrade to lucene 8.5.0 release (#54077)
Upgrades our lucene dependency to the released 8.5.0 version.
2020-03-24 13:45:50 +00:00
Mark Vieira 70cfedf542
Refactor global build info plugin to leverage JavaInstallationRegistry (#54026)
This commit removes the configuration time vs execution time distinction
with regard to certain BuildParams properties. Because of the cost of
determining Java versions for configured JDK locations, we deferred
this until execution time. This had two main downsides. First, we had
to implement all this build logic in tasks, which required a bunch of
additional plumbing and complexity. Second, because some information
wasn't known during configuration time, we had to nest any build logic
that depended on this in awkward callbacks.

We now defer to the JavaInstallationRegistry recently added in Gradle.
This utility uses a much more efficient method for probing Java
installations vs our jrunscript implementation. This, combined with some
optimizations to avoid probing the current JVM as well as deferring
some evaluation via Providers when probing installations for BWC builds,
means we can maintain effectively the same configuration time performance
while removing a bunch of complexity and runtime cost (snapshotting
inputs for the GenerateGlobalBuildInfoTask was very expensive). The end
result should be a much more responsive build execution in almost all
scenarios.

(cherry picked from commit ecdbd37f2e0f0447ed574b306adb64c19adc3ce1)
2020-03-23 15:30:10 -07:00
Namgyu Kim bc2289c258 Add nori_number token filter in analysis-nori (#53583)
This change adds the `nori_number` token filter.
It also adds a `discard_punctuation` option in nori_tokenizer that should be used in conjunction with the new filter.
2020-03-23 19:53:34 +01:00
Armin Braun 754d071c4e
Upgrade to AWS SDK 1.11.749 (#53962) (#53974)
Upgrading AWS SDK to v1.11.749.
This required building clients inside privileged contexts, because some class loading that requires privileges now happens there, as well as working around a new SDK bug in the S3 client builder.
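
A sketch of the privileged construction (SocketAccess is assumed to be the plugin's usual privileged-call helper; credentialsProvider stands in for the real credentials handling):

    AmazonS3 client = SocketAccess.doPrivileged(() -> AmazonS3ClientBuilder.standard()
        .withCredentials(credentialsProvider)
        .build());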

Closes #53191
2020-03-23 15:31:29 +01:00
Armin Braun b51ea25a00
Use Azure Bulk Deletes in Azure Repository (#53919) (#53967)
Now that we upgraded the Azure SDK to 8.6.2 in #53865 we can make use of
bulk deletes.
2020-03-23 13:35:05 +01:00
Armin Braun 69a35158ce
Fix Azure Repository with HTTPs Endpoint (#53903) (#53963)
Upgrading to 8.6.2 in #53865 broke running against HTTPS endpoints (and hence real Azure)
because the HTTPS URL connection needs the newly added permission to work.
2020-03-23 12:16:33 +01:00