Significance score doubles were being parsed as longs. Existing tests did not catch this because SignificantLongTermsTests and SignificantStringTermsTests did not set the score. Fixed these and also added an integration test.
Thanks for the report/fix, Blakko
Closes #32770
The request and response classes have been extracted from `IndexUpgradeInfoAction` into top-level classes, and moved to the protocol jar. The `UpgradeActionRequired` enum is also moved.
Relates to #29827
LoggingDeprecationHandler requires log4j2 but we don't require log4j2 in
the client. This bans LoggingDeprecationHandler and removes all uses of
it in the high level client.
Closes #32151
* Adds REST client support for PutOperationMode in ILM
* Corrects licence headers
* iter
* add request converter test
* Fixes tests
* Creates start and stop actions for controlling ILM operation
* Addresses review comments
This commit adds back the publishing section that sets the artifact id
of the generated pom file for the high level rest client. This was
accidentally removed during a consolidation of the shadow plugin logic.
We previously discussed moving the classes extending `AcknowledgedResponse` to
simply use `AcknowledgedResponse`, making the class non-abstract.
This moves the first class to do this, removing `WritePipelineResponse` in the
process.
If we like the way this looks, I will switch the remaining classes over to using
`AcknowledgedResponse`.
Since replica counts and allocation rules are set separately, it is not always clear how many replicas are to be allocated in the allocate action. Moving the replicas action to occur at the same time as the allocate action resolves this confusion, which could otherwise end in an undesired state. This means that the ReplicasAction is removed, and a new optional replicas parameter is added to AllocateAction.
Suggestion responses were previously serialized as streamables which
made writing suggesters in plugins with custom suggestion response types
impossible. This commit makes them serialized as named writeables and
provides a facility for registering a reader for suggestion responses
when registering a suggester.
This also makes Suggestion responses abstract, requiring a suggester
implementation to provide its own types. Suggesters which do not need
anything additional to what is defined in Suggest.Suggestion should
provide a minimal subclass.
The existing plugin suggester integration tests are removed and
replaced with an equivalent implementation as an example
plugin.
Rest HL client: Add get license action
Continues to use String instead of a more complex License class to
hold the license text similarly to put license.
Relates #29827
The testPutLicense test tries to put a license generated using
snapshot keys into a release cluster. This commit suppresses the
test during the release builds.
Closes #32580
The commercial clients were improperly placed into XPackClient, which is
a wrapper for the miscellaneous usage and info APIs. This commit moves
them into the HLRC.
This commit does the following:
- renames index-lifecycle plugin to ilm
- modifies the endpoints to ilm instead of index_lifecycle
- drops _xpack from the endpoints
- drops a few duplicate endpoints
Removes shadowing from the benchmarks. It isn't *strictly* needed. We do
have to rework the documentation on how to run the benchmark, but it
still seems to work if you run everything through gradle.
First, some background: we have 15 different methods to get a logger in
Elasticsearch but they can be broken down into three broad categories
based on what information is provided when building the logger.
Just a class like:
```
private static final Logger logger = ESLoggerFactory.getLogger(ActionModule.class);
```
or:
```
protected final Logger logger = Loggers.getLogger(getClass());
```
The class and settings:
```
this.logger = Loggers.getLogger(getClass(), settings);
```
Or more information like:
```
Loggers.getLogger("index.store.deletes", settings, shardId)
```
The goal of the "class and settings" variant is to attach the node name
to the logger. Because we don't always have the settings available, we
often use the "just a class" variant and get loggers without node names
attached. There isn't any real consistency here. Some loggers get the
node name because it is convenient and some do not.
This change makes the node name available to all loggers all the time.
Almost. There are some caveats around testing that I'll get to. But in
*production* code the node name is now available to all loggers. This
means we can stop using the "class and settings" variants to fetch
loggers, which was the real goal here, but a pleasant side effect is that
the node name is now consistent on every log line and optional by editing
the logging pattern. This is all powered by setting the node name
statically on a logging formatter very early in initialization.
Now to tests: tests can't set the node name statically because
subclasses of `ESIntegTestCase` run many nodes in the same jvm, even in
the same class loader. Also, lots of tests don't run with a real node so
they don't *have* a node name at all. To support multiple nodes in the
same JVM, tests suss out the node name from the thread name, which works
surprisingly well and is easy to test in a nice way. For those threads
that are not part of an `ESIntegTestCase` node we stick whatever useful
information we can get from the thread name in the place of the node
name. This allows us to keep the logger format consistent.
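For illustration, the thread-name lookup can be sketched roughly like this (a minimal sketch, not the actual implementation; the `elasticsearch[node_s0][search][T#1]` thread-name layout and the method name are assumptions):
```
// Sketch: pull a node name out of a test thread name such as
// "elasticsearch[node_s0][search][T#1]" (layout assumed for illustration).
static String nodeNameFromThreadName(String threadName) {
    int start = threadName.indexOf('[');
    int end = threadName.indexOf(']', start + 1);
    if (start >= 0 && end > start) {
        return threadName.substring(start + 1, end);
    }
    // not a node thread: fall back to the thread name itself
    return threadName;
}
```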
This adds HLRC support for the ILM operation of setting an index's lifecycle
policy.
It also includes extracting and renaming a number of classes (like the request
and response objects) as well as the addition of a new `IndexLifecycleClient`
for the HLRC. This is a prerequisite to making the `index.lifecycle.name`
setting internal only, because we require a dedicated REST endpoint to change
the policy, and our tests currently set this setting with the REST client
in multiple places. A subsequent PR will change the setting to be internal and move
those uses over to this new API.
This misses some links to the documentation because I don't think ILM has any
documentation available yet.
Relates to #29827 and #29823
In the HL REST client we replace the License object with a string, because of
the complexity of this class. It is also not really needed on the client side since
end-users are not interacting with the license besides passing it as a string
to the server.
Relates #29827
This adds the ERR metric to the provided xContent parsers in the module and the
high level rest client registry. Also adding integration tests to make sure the
metric is correctly registered and usable from the client.
The notion of "quality" is an overloaded term in the search ranking evaluation
context. It's usually used to describe certain levels of "good" vs. "bad" of a
search result with respect to the user's information need. We currently report the
result of the ranking evaluation as `quality_level` which is a bit misleading.
This changes the response parameter name to `metric_score` which fits better.
Currently the ranking evaluation response contains an 'unknown_docs' section
for each search use case in the evaluation set. It contains document ids for
results in the search hits that currently don't have a quality rating.
This change renames it to `unrated_docs`, which better reflects its purpose.
Relates #29827
This implementation behaves like the current transport client, in that you basically cannot configure a Watch POJO representation as an argument to the put watch API, but only a bytes reference. You can use the `WatchSourceBuilder` from the `org.elasticsearch.plugin:x-pack-core` dependency to build watches.
This commit also changes the license type to trial, so that watcher is available in high level rest client tests.
/cc @hub-cap
* Detect and prevent configuration that triggers a Gradle bug
As we found in #31862, this can lead to a lot of wasted time as it's not
immediately obvious what's going on.
Given how many projects we have, it's getting increasingly easy to run
into gradle/gradle#847.
Moves the customizations to the build to produce nice shadow jars and
javadocs into common build code, mostly BuildPlugin with a little into
the root build.gradle file. This means that any project that applies the
shadow plugin will automatically be set up just like the high level rest
client:
* The non-shadow jar will not be built
* The shadow jar will not have a "classifier"
* Tests will run against the shadow jar
* Javadoc will include all of the shadowed classes
* Service files in `META-INF/services` will be merged
We have been encountering name mismatches between APIs defined in our
REST spec and method names that have been added to the high-level REST
client. We should check this automatically to prevent further mismatches,
and correct all the current ones.
This commit adds a test for this and corrects the issues found by it.
Ensure our tests can run in a FIPS JVM
JKS keystores cannot be used in a FIPS JVM as attempting to use one
in order to init a KeyManagerFactory or a TrustManagerFactory is not
allowed (JKS keystore algorithms for private key encryption are not
FIPS 140 approved).
This commit replaces JKS keystores in our tests with the
corresponding PEM encoded key and certificates both for key and trust
configurations.
Whenever it's not possible to refactor the test, i.e. when we are
testing that we can load a JKS keystore, etc. we attempt to
mute the test when we are running in a FIPS 140 JVM. Testing for the
JVM is naive and is based on the name of the security provider, but as
we control the testing infrastructure this should be reliable enough.
Other cases of tests being muted are the ones that involve custom
TrustStoreManagers or KeyStoreManagers, null TLS Ciphers and the
SAMLAuthenticator class as we cannot sign XML documents in the
way we were doing. SAMLAuthenticator tests in a FIPS JVM can be
re-enabled with precomputed and signed SAML messages at a later stage.
IT will be covered in a subsequent PR
There have been changes in error messages for `SSLHandshakeException`.
This has caused a couple of failures in our tests.
This commit modifies test verification to assert on exception type of
class `SSLHandshakeException`.
There was another issue in Java 11 which caused an NPE. The bug has now
been fixed in Java 11 early access build 22.
Bug Ref: https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8206355
Enable the skipped tests due to this bug.
Closes #31940
Java 11 seems to get more verbose on the ClassCastException we check for in
SearchDocumentationIT. This changes the test from asserting the exact exception
message to only checking the two classes involved are part of the message.
Closes #32029
Make SnapshotInfo and CreateSnapshotResponse parsers lenient for backwards compatibility. Remove extraneous fields from CreateSnapshotRequest toXContent.
* Adds a new auto-interval date histogram
This change adds a new type of histogram aggregation called `auto_date_histogram` where you can specify the target number of buckets you require and it will find an appropriate interval for the returned buckets. The aggregation works by first collecting documents in buckets at a one-second interval; when it has created more than the target number of buckets it merges these buckets into one-minute interval buckets and continues collecting until it reaches the target number of buckets again. It will keep merging buckets whenever it exceeds the target until either collection is finished or the highest interval (currently years) is reached. A similar process happens at reduce time.
This aggregation intentionally does not support min_doc_count, offset and extended_bounds to keep the already complex logic from becoming more complex. The aggregation accepts sub-aggregations but will always operate in `breadth_first` mode deferring the computation of sub-aggregations until the final buckets from the shard are known. min_doc_count is effectively hard-coded to zero meaning that we will insert empty buckets where necessary.
Closes #9572
* Adds documentation
* Added sub aggregator test
* Fixes failing docs test
* Brings branch up to date with master changes
* trying to get tests to pass again
* Fixes multiBucketConsumer accounting
* Collects more buckets than needed on shards
This gives us more options at reduce time in terms of how we do the
final merge of the buckets to produce the final result
* Revert "Collects more buckets than needed on shards"
This reverts commit 993c782d117892af9a3c86a51921cdee630a3ac5.
* Adds ability to merge within a rounding
* Fixes non-timezone doc test failure
* Fix time zone tests
* iterates on tests
* Adds test case and documentation changes
Added some notes in the documentation about the intervals that can be
returned.
Also added a test case that utilises the merging of consecutive buckets
* Fixes performance bug
The bug meant that getAppropriateRounding took a huge amount of time
if the range of the data was large but also sparsely populated. In
these situations the rounding would be very low so iterating through
the rounding values from the min key to the max key took a long time
(~120 seconds in one test).
The solution is to add a rough estimate first which chooses the
rounding based just on the long values of the min and max keys alone
but selects the rounding one lower than the one it thinks is
appropriate so the accurate method can choose the final rounding taking
into account the fact that intervals are not always fixed length.
The commit also adds more tests
* Changes to only do complex reduction on final reduce
* merge latest with master
* correct tests and add a new test case for 10k buckets
* refactor to perform bucket number check in innerBuild
* correctly derive bucket setting, update tests to increase bucket threshold
* fix checkstyle
* address code review comments
* add documentation for default buckets
* fix typo
This commit adds the _xpack/usage api to the high level rest client.
Currently in the transport api, the usage data is exposed in a limited
fashion, at most giving one level of helper methods for the inner keys
of data, but then exposing those subobjects as maps of objects. Rather
than making parsers for every set of usage data from each feature, this
PR exposes the entire set of usage data as a map of maps.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `client/rest` project to use the new versions.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `client/rest-high-level` project to use the new
versions.
The `:x-pack:protocol` project is an implementation detail shared by the
xpack projects and the high level rest client and really doesn't deserve
its own maven coordinates and published javadoc. This change bundles
`:x-pack:protocol` into the high level rest client.
Relates to #29827
Originally I put the X-Pack info object into the top level rest client
object. I did that because we thought we'd like to squash `xpack` from
the name of the X-Pack APIs now that it is part of the default
distribution. We still kind of want to do that, but at least for now we
feel like it is better to keep the high level rest client aligned with
the other language clients like C# and Python. This shifts the X-Pack
info API to align with its json spec file.
Relates to #31870
This is the first x-pack API we're adding to the high level REST client
so there is a lot to talk about here!
= Open source
The *client* for these APIs is open source. We're taking the previously
Elastic licensed files used for the `Request` and `Response` objects and
relicensing them under the Apache 2 license.
The implementation of these features is staying under the Elastic
license. This lines up with how the rest of the Elasticsearch language
clients work.
= Location of the new files
We're moving all of the `Request` and `Response` objects that we're
relicensing to the `x-pack/protocol` directory. We're adding a copy of
the Apache 2 license to the root of the `x-pack/protocol` directory to
line up with the language in the root `LICENSE.txt` file. All files in
this directory will have the Apache 2 license header as well. We don't
want there to be any confusion. Even though the files are under the
`x-pack` directory, they are Apache 2 licensed.
We chose this particular directory layout because it keeps the X-Pack
stuff together and makes it easier to think about.
= Location of the API in the REST client
We've been following the layout of the rest-api-spec files for other
APIs and we plan to do this for the X-Pack APIs with one exception:
we're dropping the `xpack` from the name of most of the APIs. So
`xpack.graph.explore` will become `graph().explore()` and
`xpack.license.get` will become `license().get()`.
`xpack.info` and `xpack.usage` are special here though because they
don't belong to any proper category. For now I'm just calling
`xpack.info` `xPackInfo()` and intend to call usage `xPackUsage` though
I'm not convinced that this is the final name for them. But it does get
us started.
= Jars, jars everywhere!
This change makes the `xpack:protocol` project a `compile` scoped
dependency of the `x-pack:plugin:core` and `client:rest-high-level`
projects. I intend to keep it a compile scoped dependency of
`x-pack:plugin:core` but I intend to bundle the contents of the protocol
jar into the `client:rest-high-level` jar in a follow up. This change
has grown large enough at this point.
In that followup I'll address javadoc issues as well.
= Breaking-Java
This breaks the transport client by moving a few classes around. We've
traditionally been ok with doing this to the transport client.
This is a followup to #31537. It makes a number of changes requested by
a review that came after the PR was merged. These are mostly cleanups
and doc improvements.
This PR does the server side work for adding the Get Index API to the REST
high-level-client, namely moving resolving default settings to the
transport action. A follow up would be the client side changes.
Some proxies require all requests to have paths starting with / since
there are no relative paths at the HTTP connection level. Elasticsearch
assumes paths are absolute. In order to run rest tests against a cluster
behind such a proxy, set the system property
tests.rest.client_path_prefix to /.
* remove explicit wrapper task
It's created by Gradle and triggers a deprecation warning
Simplify configuration
* Upgrade shadow plugin to get rid of Gradle deprecation
* Move compile configuration to base plugin
Solves Gradle deprecation warning from earlier Gradle versions
* Enable stable publishing in the Gradle build
* Replace usage of deprecated property
* bump Gradle version in build compare
* Move to Gradle 4.8 RC1
* Use latest version of plugin
The current version does not work with Gradle 4.8 RC1
* Switch to Gradle GA
* Add and configure build compare plugin
* add work-around for https://github.com/gradle/gradle/issues/5692
* work around https://github.com/gradle/gradle/issues/5696
* Make use of Gradle build compare with reference project
* Make the manifest more compare friendly
* Clear the manifest in compare friendly mode
* Remove animalsniffer from buildscript classpath
* Fix javadoc errors
* Fix doc issues
* reference Gradle issues in comments
* Conditionally configure build compare
* Fix some more doclint issues
* fix typo in build script
* Add sanity check to make sure the test task was replaced
Relates to #31324. It seems like Gradle has inconsistent behavior and
the task is not always replaced.
* Include number of non conforming tasks in the exception.
* No longer replace test task, create implicit instead
Closes #31324. The issue has full context in comments.
With this change the `test` task becomes nothing more than an alias for `utest`.
Some of the stand alone tests that had a `test` task now have `integTest`, and a
few of them that used to have `integTest` to run multiple tests now only
have `check`.
This will also help separate unit/micro tests from integration tests.
* Revert "No longer replace test task, create implicit instead"
This reverts commit f1ebaf7d93e4a0a19e751109bf620477dc35023c.
* Fix replacement of the test task
Based on information from gradle/gradle#5730 replace the task taking
into account the task providers.
Closes #31324.
* Only apply build compare plugin if needed
* Make sure test runs before integTest
* Fix doclint after merge
* PR review comments
* Switch to Gradle 4.8.1 and remove workaround
* PR review comments
* Consolidate task ordering
The test could sometimes generate an empty array of random stored fields
to fetch which would cause the test to fail. It is fine not to allow
empty here because we already randomize whether or not to generate the
stored fields param anyway.
Added support to the high-level rest client for the create snapshot API call. This required
several changes to toXContent which may need to be cleaned up in a later PR. Also
added several parsers for fromXContent to be able to retrieve appropriate responses
along with tests.
We recently introduced a mechanism that allows to specify a node
selector as part of do sections (see #31471). When a node selector that
is not the default one is configured, a new client will be initialized
with the same properties as the default one, but with the specified
node selector. This commit improves that mechanism by also closing
the additional clients being created and adding equals/hashcode impls to
the custom node selectors as they are cached into a map.
TransportAction currently contains 2 doExecute methods, one which takes
the task, and one that does not. The latter is what some subclasses
implement, while the first one just calls the latter, dropping the given
task. This commit combines these methods, in favor of just always
assuming a task is present.
We have made node selectors configurable per request, but none of
the other language clients allow for that.
A good reason not to do so, is that having a different node selector
per request breaks round-robin. This commit makes NodeSelector
configurable only at client initialization. It also improves the docs
on this matter, important given that a single node selector can still
affect round-robin.
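For illustration, configuring the selector once at client initialization might look like this (a sketch; `NOT_MASTER_ONLY` is the built-in selector mentioned elsewhere in this log):
```
RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http"))
    .setNodeSelector(NodeSelector.NOT_MASTER_ONLY) // applies to every request
    .build();
```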
Most transport actions don't need the node ThreadPool. This commit
removes the ThreadPool as a super constructor parameter for
TransportAction. The actions that do need the thread pool then have a
member added to keep it from their own constructor.
Most transport actions don't need to resolve index names. This commit
removes the index name resolver as a super constructor parameter for
TransportAction. The actions that do need the resolver then have a
member added to keep the resolver from their own constructor.
Since #30966, Action no longer has anything but a call to the
GenericAction super constructor. This commit renames GenericAction
into Action, thus eliminating the Action class. Additionally, this
commit removes the Request generic parameter of the class, since
it was unused.
Add a `NodeSelector` so that users can filter the nodes that receive
requests based on node attributes.
I believe we'll need this to backport #30523 and we want it anyway.
I also added a bash script to help with rebuilding the sniffer parsing
test documents.
While the other two ranking evaluation metrics (precision and reciprocal rank)
already provide a more detailed output for how their score is calculated, the
discounted cumulative gain metric (dcg) and its normalized variant are lacking
this until now. It's not really clear which level of detail might be useful for
debugging and understanding the final metric calculation, but this change adds a
`metric_details` section to REST output that contains some information about the
evaluation details.
With #29331 we added support for the cluster health API to the
high-level REST client. The transport client does not support the level
parameter, and it always returns all the info needed for shards level
rendering. We have maintained that behaviour when adding support for
cluster health to the high-level REST client, to ease migration, but the
correct thing to do is to default the high-level REST client to
`cluster` level, which is the same default as when going through the
Elasticsearch REST layer.
The default wait_for_active_shards is NONE for cluster health, which
differs from all the other APIs in master, hence we need to make sure to
set the parameter whenever it differs from NONE (0). The test around
this also had a bug, which is why this was not originally uncovered.
Relates to #29331
This commit removes all the API methods that accept a `Header` varargs
argument, in favour of the newly introduced API methods that accept a
`RequestOptions` argument.
Relates to #31069
The response currently implements ToXContentFragment although the only time it's used
it is supposed to print out a complete object rather than a fragment. Note that this
is the client version of the response, used only in the high-level client.
Given the weirdness of the response returned by the get alias API, we went for a client specific response, which allows us to hold the error message, exception and status returned as part of the response together with aliases. See #30536.
Relates to #27205
Allows users of the Low Level REST client to specify which hosts a
request should be run on. They implement the `NodeSelector` interface
or reuse a built in selector like `NOT_MASTER_ONLY` to choose which nodes
are valid. Using it looks like:
```
Request request = new Request("POST", "/foo/_search");
RequestOptions options = request.getOptions().toBuilder();
options.setNodeSelector(NodeSelector.NOT_MASTER_ONLY);
request.setOptions(options);
...
```
This introduces a new `Node` object which contains a `HttpHost` and the
metadata about the host. At this point that metadata is just `version`
and `roles` but I plan to add node attributes in a followup. The
canonical way to **get** this metadata is to use the `Sniffer` to pull
the information from the Elasticsearch cluster.
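A rough usage sketch (hedged; defaults and refresh timings elided) of wiring a `Sniffer` to keep the client's `Node` list and metadata up to date:
```
RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build();
// periodically fetches the cluster's nodes, including version and roles metadata
Sniffer sniffer = Sniffer.builder(client).build();
// ... use the client ...
sniffer.close();
client.close();
```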
I've marked this as "breaking-java" because it breaks custom
implementations of `HostsSniffer` by renaming the interface to
`NodesSniffer` and by changing it from returning a `List<HttpHost>` to a
`List<Node>`. It *shouldn't* break anyone else though.
Because we expect to find it useful, this also adds `host_selector`
support to `do` statements in the yaml tests. Using it looks a little
like:
```
---
"example test":
  - skip:
      features: host_selector
  - do:
      host_selector:
        version: " - 7.0.0" # same syntax as skip
      apiname:
        something: true
```
The `do` section parses the `version` string into a host selector that
uses the same version comparison logic as the `skip` section. When the
`do` section is executed it passes the selector off to the `RestClient`, using
the `ElasticsearchHostsSniffer` to sniff the required metadata.
The idea is to use this in mixed version tests to target a specific
version of Elasticsearch so we can be sure about the deprecation
logging though we don't currently have any examples that need it. We do,
however, have at least one open pull request that requires something
like this to properly test it.
Closes #21888
The goal of this commit is to address unknown licenses when producing
the dependencies info report. We have two different checks that we run
on licenses. The first check is whether or not we have stashed a copy of
the license text for a dependency in the repository. The second is to
map every dependency to a license type (e.g., BSD 3-clause). The problem
here is that the way we were handling licenses in the second check
differs from how we handle licenses in the first check. The first check
works by finding a license file with the name of the artifact followed
by the text -LICENSE.txt. Yet in some cases we allow mapping an artifact
name to another name used to check for the license (e.g., we map
lucene-.* to lucene, and opensaml-.* to shibboleth). The second check
understood the first way of looking for a license file but not the
second way. So in this commit we teach the second check about the
mappings from artifact names to license names. We do this by copying the
configuration from the dependencyLicenses task to the dependenciesInfo
task and then reusing the code from the first check in the second
check. There were some other challenges here though. For example,
dependenciesInfo was checking too many dependencies. For now, we should
only be checking direct dependencies and leaving transitive dependencies
from another org.elasticsearch artifact to that artifact (we want to do
this differently in a follow-up). We also want to disable
dependenciesInfo for projects that we do not publish, users only care
about licenses they might be exposed to if they use our assembled
products. With all of the changes in this commit we have eliminated all
unknown licenses. A follow-up will enforce that when we add a new
dependency it does not get mapped to unknown, these will be forbidden in
the future. Therefore, with this change and earlier changes we are left
with no unknown licenses and two custom licenses; custom here means it
does not map to an SPDX license type. Those two licenses are xz and
ldapsdk. A future change will not allow additional custom licenses
unless they are explicitly whitelisted. This ensures that if a new
dependency is added it is mapped to an SPDX license or mapped to custom
because it does not have an SPDX license.
* Initial commit of rest high level exposure of cancel task
* fix javadocs
* address some code review comments
* update branch to use tasks namespace instead of cluster
* High-level client: list tasks failure to not lose nodeId
This commit reworks testing for `ListTasksResponse` so that random
fields insertion can be tested and xcontent equivalence can be checked
too. Proper exclusions need to be configured, and failures need to be
tested separately. This helped finding a little problem, whenever there
is a node failure returned, the nodeId was lost as it was never printed
out as part of the exception toXContent.
* added comment
* merge from master
* re-work CancelTasksResponseTests to separate XContent failure cases from non-failure cases
* remove duplication of logic in parser creation
* code review changes
* refactor TasksClient to support RequestOptions
* add tests for parent task id
* address final PR review comments, mostly formatting and such
We no longer need animal sniffer because we use JDK functionality
(introduced in JDK 9) to target older versions of the JDK for
compilation. This functionality means that the JDK handles the problem
of ensuring that we do not use JDK APIs from the version that we are
compiling from that are not available in the version that we are
compiling to. A previous commit removed this for the REST client (where
we target JDK 7) but a few traces were left behind.
With #30490 we have introduced a new way to provide request options
whenever sending a request using the high-level REST client. Before you
could provide headers as the last argument varargs of each API method,
now you can provide `RequestOptions` that in the future will allow to
provide more options which can be specified per request.
This commit deprecates all of the client methods that accept a `Header`
varargs argument in favour of new methods that accept `RequestOptions`
instead. For some APIs we don't even go through deprecation given that
they were not released since they were added, hence in that case we can
just move them to the new method.
This modifies the high level rest client to allow calling code to
customize per request options for the bulk API. You do the actual
customization by passing a `RequestOptions` object to the API call
which is set on the `Request` that is generated by the high level
client. It also makes the `RequestOptions` a thing in the low level
rest client. For now that just means you use it to customize the
headers and the `httpAsyncResponseConsumerFactory` and we'll add
node selectors and per request timeouts in a follow up.
I only implemented this on the bulk API because it is the first one
in the list alphabetically and I wanted to keep the change small
enough to review. I'll convert the remaining APIs in a followup.
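A hedged sketch of what the per-request customization looks like for bulk (the buffer size here is an arbitrary example):
```
RequestOptions.Builder options = RequestOptions.DEFAULT.toBuilder();
options.setHttpAsyncResponseConsumerFactory(
    new HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory(1024 * 1024));
BulkResponse response = client.bulk(bulkRequest, options.build());
```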
This commit removes the RequestBuilder generic type from Action. It was
needed to be used by the newRequest method, which in turn was used by
client.prepareExecute. Both of these methods are now removed, along with
the existing users of prepareExecute constructing the appropriate
builder directly.
This commit reworks the Sniffer component to simplify it and make it possible to test it.
In particular, it no longer takes out the host that failed when sniffing on failure, but rather relies on whatever the cluster returns. This is the result of some valid comments from #27985. Taking out one single host is too naive, hard to test and debug.
A new Scheduler abstraction is introduced to abstract the tasks scheduling away and make it possible to plug in any test implementation and take out timing aspects when testing.
Concurrency aspects have also been improved, synchronized methods are no longer required. At the same time, we were able to take #27697 and #25701 into account and fix them, especially now that we can more easily add tests.
Last but not least, unit tests are added for the Sniffer component, long overdue.
Closes #27697 Closes #25701
This commit adds Verify Repository, the associated docs and tests for
the high level REST API client. A few small changes to the Verify
Repository Response went into the commit as well.
Relates #27205
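A short usage sketch (hedged; depending on the client version this method may take `Header` varargs instead of `RequestOptions`):
```
VerifyRepositoryRequest request = new VerifyRepositoryRequest("my_repository");
VerifyRepositoryResponse response =
        client.snapshot().verifyRepository(request, RequestOptions.DEFAULT);
```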
Our API specs define the tasks API as e.g. tasks.list, meaning that they belong to their own namespace. This commit moves them from the cluster namespace to their own namespace.
Relates to #29546
Adding headers rather than setting them all at once seems more
user-friendly and we already do it in a similar way for parameters
(see Request#addParameter).
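Illustrative sketch of the two addition-style calls side by side (the header name and value are placeholders):
```
Request request = new Request("GET", "/_cluster/health");
request.addParameter("level", "cluster");
request.addHeader("X-My-Header", "some-value"); // added one at a time, like addParameter
```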
This commit adds a max wait timeout of one second to all the latch.await
calls made in RestClientTests. It also makes it clearer that the `onSuccess`
listener method will never be called given that the underlying http
client is mocked and makes sure that `latch.countDown` is always called
This commit adds Delete Repository, the associated docs and tests for
the high level REST API client. It also cleans up a seemingly innocuous
line in the RestDeleteRepositoryAction and some naming in SnapshotIT.
Relates #27205
The default behaviour for Apache HTTP client is to mimic the standard
browser behaviour of clearing the authentication cache (for a given
host) if that host responds with 401.
This behaviour is appropriate in a interactive browser environment
where the user is given the opportunity to provide alternative
credentials, but it is not the preferred behaviour for the ES REST
client.
X-Pack may respond with a 401 status if a request is made before the
node/cluster has recovered sufficient state to know how to handle the
provided authentication credentials - for example the security index
needs to be recovered before we can authenticate native users.
In these cases the correct behaviour is to retry with the same
credentials (rather than discarding those credentials).
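Apache HttpAsyncClient exposes a switch for this; disabling the cache from the REST client's config callback might look like this (a sketch):
```
RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http"))
    .setHttpClientConfigCallback(httpClientBuilder ->
        // keep retrying with the same credentials instead of clearing them on 401
        httpClientBuilder.disableAuthCaching())
    .build();
```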
This change adds a `listTasks` method to the high level java
ClusterClient which allows listing running tasks through the
task management API.
Related to #27205
This commit adds Create Repository, the associated docs and tests
for the high level REST API client. A few small changes to the
PutRepository Request and Response went into the commit as well.
This commit changes the default out-of-the-box configuration for the
number of shards from five to one. We think this will help address a
common problem of oversharding. For users with time-based indices that
need a different default, this can be managed with index templates. For
users with non-time-based indices that find they need to re-shard with
the split API in place they no longer need to resort only to
reindexing.
Since this has the impact of changing the default number of shards used
in REST tests, we want to ensure that we still have coverage for issues
that could arise from multiple shards. As such, we randomize (rarely)
the default number of shards in REST tests to two. This is managed via a
global index template. However, some tests check the templates that are
in the cluster state during the test. Since this template is randomly
there, we need a way for tests to skip adding the template used to set
the number of shards to two. For this we add the default_shards feature
skip. To avoid having to write our docs in a complicated way because
sometimes they might be behind one shard, and sometimes they might be
behind two shards we apply the default_shards feature skip to all docs
tests. That is, these tests will always run with the default number of
shards (one).
The High Level REST Client's documentation suggested that users should
use the Low Level REST Client for index management activities. This
change removes that suggestion because the high level REST client
supports those APIs now.
This also changes the examples in the migration docs that still use
the Low Level REST Client to use the non-deprecated variants of
`performRequest`.
These tests are sharing the same server and client for every test. Yet,
we are seeing some tests fail with mysterious connection resets. It is
not clear what is happening but one theory is that the tests are
interfering with each other. This commit moves to use a separate server
and client per test.
Previously `BulkProcessor` retry logic was based on the exception type of the failed response (`EsRejectedExecutionException`). This commit changes it to be based on the returned status code. This allows us to reproduce the same retry behaviour when the `BulkProcessor` is used from the high-level REST client, which was previously not the case as we cannot rebuild the same exception type when parsing back the response. This change has no effect on the transport client.
Closes #28885
This commit adds the Snapshot Client with a first API call within it,
the get repositories call in snapshot/restore module. This also creates
a snapshot namespace for the docs, as well as get repositories docs.
Relates #27205
We've been setting this value to 500ms in the default low-level REST
client configuration, misunderstanding the effect that it would have.
This proved very problematic, as it ends up causing `TimeoutException`
to be returned from the leased pool in some cases even for successful requests.
Closes #24069
Deprecate the many arguments versions of `performRequest` and
`performRequestAsync` in favor of the `Request` object flavored variants
introduced in #29623. We'll be dropping the many arguments variants in
7.0 because they make it difficult to add new features in a backwards
compatible way and they create a *ton* of intellisense noise.
This PR adds support for the Get Settings API to the java high-level rest client.
Furthermore, logic related to the retrieval of default settings has been moved from the rest layer into the transport layer and now default settings may be retrieved consistently via both the rest API and the transport API.
Adds two new methods to `RestClient` that take a `Request` object. These
methods will allow us to add more per-request customizable options
without creating more and more and more overloads of the `performRequest`
and `performRequestAsync` methods. These new methods look like:
```
Response performRequest(Request request)
```
and
```
void performRequestAsync(Request request, ResponseListener responseListener)
```
This change doesn't add any actual features but enables adding things like
per request timeouts and per request node selectors. This change *does*
rework the `HighLevelRestClient` and its tests to use these new `Request`
objects and it does update the docs.
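An end-to-end sketch of the new flavor (hedged; the endpoint and body are examples, and `setJsonEntity` is the convenience variant — older versions may require `setEntity` with an `HttpEntity`):
```
Request request = new Request("PUT", "/twitter/_doc/1");
request.setJsonEntity("{\"user\": \"kimchy\"}");
Response response = client.performRequest(request);
```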
This *mostly* silences `javadoc`'s warning about defaulting to
generating html4 files by enabling generating html5 file for the
projects for which that works. It didn't work in a half dozen projects,
about half of which I've fixed in this PR, entirely by replacing
`<tt>thing</tt>` with `{@code thing}`.
There are a few remaining projects that contain javadoc with invalid
html5. I'll fix those projects in a followup.
The low-level REST client targets JDK 7. To avoid compiling against JDK
functionality not available in JDK 7, we use animal sniffer. However,
when we switched to using the JDK 9 and now the JDK 10 compiler which
has built-in support for targeting previous JDKs, we no longer need to
use animal sniffer. This is because the JDK is now packaged with the
signatures needed to ensure that when we target JDK 7 at compile-time it
is detected that we are only using JDK 7 functionality. This commit
removes the use of animal sniffer from the low-level REST client build.
This commit adds the distribution type to the startup scripts so that we
can discern from log output and the main response the type of the
distribution (deb/rpm/tar/zip).
As part of adding support for new API to the high-level REST client,
we added support for the `flat_settings` parameter to some of our
request classes. We added documentation that such flag is only ever
read by the high-level REST client, but the truth is that it doesn't
do anything given that settings are always parsed back into a `Settings`
object, no matter whether they are returned in a flat format or not.
It was a mistake to add support for this flag in the context of the
high-level REST client, hence this commit removes it.
The following is the current behaviour, tested now through a specific
test.
The low-level REST client doesn't add a leading slash when not
provided, unless a `pathPrefix` is configured in which case a trailing
slash will be automatically added when concatenating the prefix and the
provided uri.
Also when configuring a pathPrefix, if it doesn't start with a '/' it
will be modified by adding the missing leading '/'.
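For instance (a sketch), the normalization described above means both of these configure the same prefix:
```
RestClient withSlash = RestClient.builder(new HttpHost("localhost", 9200, "http"))
    .setPathPrefix("/my/prefix")
    .build();
RestClient withoutSlash = RestClient.builder(new HttpHost("localhost", 9200, "http"))
    .setPathPrefix("my/prefix") // a leading '/' is added automatically
    .build();
```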
CRUD: Parsing changes for UpdateRequest (#29293)
Use `ObjectParser` to parse `UpdateRequest` so we reject unknown fields
and drop support for the `_fields` parameter because it was deprecated
in 5.x.
This change validates that the `_search` request does not have trailing
tokens after the main object and fails the request with a parsing exception otherwise.
Closes #28995
Some features have been deprecated since `6.0` like the `_parent` field or the
ability to have multiple types per index. This allows to remove quite some
code, which in turn will hopefully make it easier to proceed with the removal
of types.
Currently the ranking evaluation API doesn't support many of the
standard parameters of the search API. Some of these make sense, like
adding support for the common indices options parameters, which this
change adds.
When the `BulkProcessor` is used with the high-level REST client, a scheduler is internally created that allows to schedule tasks. Such scheduler is not exposed to users and needs to be closed once the `BulkProcessor` is closed. There are two ways to close the `BulkProcessor` though, one is the ordinary `close` method and the other one is `awaitClose`. The former closes the scheduler while the latter doesn't, leaving threads lingering.
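A hedged sketch of the `awaitClose` path (the `client` and `listener` wiring is assumed to be in scope):
```
BulkProcessor processor = BulkProcessor.builder(
        (request, bulkListener) -> client.bulkAsync(request, bulkListener), listener)
    .build();
processor.add(new IndexRequest("index", "_doc", "1").source("field", "value"));
// awaitClose flushes pending requests and waits; with this fix it also shuts
// down the internally created scheduler, just like close() already did
boolean terminated = processor.awaitClose(30, TimeUnit.SECONDS);
```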
* Remove all dependencies from XContentBuilder
This commit removes all of the non-JDK dependencies from XContentBuilder, with
the exception of `CollectionUtils.ensureNoSelfReferences`. It adds a third
extension point around dealing with time-based fields and formatters to work
around the Joda dependency.
This decoupling allows us to be able to move XContentBuilder to a separate lib
so it can be available for things like the high level rest client.
Relates to #28504
This was the plan from day one but due to a silly bug nodes were immediately retried after they were marked as dead for the first time. From the second time on, the expected backoff was applied.
Some source files seem to have the execute bit (a+x) set, which doesn't
really seem to hurt but is a bit odd. This change removes those, making
the permissions similar to other source files in the repository.
Adds docs for `HighLevelRestClient#multiSearch`. Unlike the `multiGet`
docs these are much more sparse because multi-search doesn't support
setting many options on the `MultiSearchRequest` and instead just wraps
a list of `SearchRequest`s.
Closes #28389
Currently we store the indices specified in the request URL together with all
the other ranking evaluation specification in RankEvalSpec. This is not ideal
since e.g. the indices are not rendered to xContent and so cannot be parsed
back. Instead we should keep them in RankEvalRequest.
Adds SSLHandshakeException to the list of Exceptions that are
specifically rethrown from the async thread so its type is preserved.
This should make it easier to debug synchronous calls with ssl issues.
In the past the Low Level REST Client was super careful not to wrap
any exceptions that it throws from synchronous calls so that callers can
catch the exceptions and work with them. The trouble with that is that
the exceptions are originally thrown on the async thread pool and then
transferred back into the calling thread. That means that the stack trace of
the exception doesn't have the calling method which is *super* *ultra*
confusing.
This change always wraps exceptions transferred from the async thread
pool so that the stack trace of the thrown exception contains the
caller's stack. It tries to preserve the type of the thrown exception but
this is quite a fiddly thing to get right. We have to catch every type
of exception that we want to preserve, wrap with the same type and
rethrow. I've preserved the types of all exceptions that we had tests
mentioning but no other exceptions. The other exceptions are either
wrapped in `IOException` or `RuntimeException`.
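The preserve-the-type dance looks roughly like this (a simplified sketch, not the client's actual code; the method name is hypothetical):
```
// Wrap an exception caught on the async thread so the rethrown copy carries
// the calling thread's stack trace while keeping a known concrete type.
private static Exception wrapPreservingType(Exception exception) {
    if (exception instanceof ConnectException) {
        ConnectException e = new ConnectException(exception.getMessage());
        e.initCause(exception); // ConnectException has no (String, Throwable) constructor
        return e;
    }
    if (exception instanceof IOException) {
        return new IOException(exception.getMessage(), exception);
    }
    if (exception instanceof RuntimeException) {
        return new RuntimeException(exception.getMessage(), exception);
    }
    return new RuntimeException("error while performing request", exception);
}
```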
Closes #28399
* Decouple XContentBuilder from BytesReference
This commit removes all mentions of `BytesReference` from `XContentBuilder`.
This is needed so that we can completely decouple the XContent code and move it
into its own dependency.
While this change appears large, it is due to two main changes, moving
`.bytes()` and `.string()` out of XContentBuilder itself into static methods
`BytesReference.bytes` and `Strings.toString` respectively. The rest of the
change is code reacting to these changes (the majority of it in tests).
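In practice the migration looks like this (a hedged before/after sketch):
```
XContentBuilder builder = XContentFactory.jsonBuilder()
    .startObject().field("user", "kimchy").endObject();
// before: builder.bytes() and builder.string()
BytesReference bytes = BytesReference.bytes(builder);
String json = Strings.toString(builder);
```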
Relates to #28504
The REST status 503 means "I can not handle the request that you sent
me." However today we respond to a main request with a 503 when there
are certain cluster blocks despite still responding with an actual main
response. This is broken, we should respond with a 200 status. This
commit removes this silliness.
The recent addition of the _cluster/settings API was merged together with another change that added encoding of the different URL parts. That broke the cluster PUT settings API straight-away. This commit fixes this problem. The '/' that's part of the /_cluster/settings endpoint should not be encoded.
The REST high-level client supports now encoding of path parts, so that for instance documents with valid ids, but containing characters that need to be encoded as part of urls (`#` etc.), are properly supported. We also make sure that each path part can contain `/` by encoding them properly too.
Closes #28625
* Move to non-deprecated XContentHelper.createParser(...)
This moves away from one of the now-deprecated XContentHelper.createParser
methods in favor of specifying the deprecation logger at parser creation time.
Relates to #28449
Note that this doesn't move all the `createParser` calls because some of them
use the already-deprecated method that doesn't specify the XContentType.
* Remove the deprecated (and now non-needed) createParser method
* [DOCS] expand examples on providing mappings for create index and put mapping
The create index API and put mappings API docs for the high-level Java REST client didn't have a lot of info on how to provide mappings. This commit adds some examples.
This commit splits the async execution documentation into 2 parts, one
for the async method itself and one for the action listener. This allows
to add more doc and to use CountDownLatches in doc tests to wait for
asynchronous operations to be completed before moving to the next test.
It also renames few files.
Related to #28457
Similarly to other documentation tests in the high level client, the
asynchronous operations like update, index or delete can make the test
fail if they sneak into the middle of another operation.
This commit moves the async doc tests to be the last ones executed and
adds assert busy loops to ensure that the asynchronous operations are
correctly terminated.
Closes #28446
Adds allow_partial_search_results flag to search requests with default setting = true.
When false, the request will error if the search either times out, has partial errors or has missing shards, rather
than returning partial search results. A cluster-level setting provides a default for search requests with no flag.
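On a per-request basis this looks like (a sketch):
```
SearchRequest request = new SearchRequest("my_index");
// fail the request instead of returning partial results
request.allowPartialSearchResults(false);
```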
Closes #27435
This change adds support for the new ranking evaluation API to the High Level Rest Client.
This mostly means adding support for parsing the various response objects back from the
REST representation. It includes one change to the response syntax where previously we didn't
print the type of the metric details section but we now need it to pick the right parser to
parse this section back.
Closes #28198
Script fields can get a bit more complicated than just stored fields. A script can return null, an object and also an array. Extended parsing to support such valid values. Also renamed util method from `parseStoredFieldsValue` to `parseFieldsValue` given that it can parse stored fields but also script fields, anything that's returned as `fields`.
Closes #28380
It has been pointed out that GET with a body may cause problems with some proxies. We are therefore switching to POST for the APIs that retrieve info and support a request body.
Closes #28326
The search request body can never be null as `SearchRequest` doesn't allow the inner `SearchSourceBuilder` to be null. Instead, when search source is not set, the request body is going to be an empty json object (`{}`)
Today, the way to call the APIs under the indices namespace is by doing e.g. `client.indices().createIndex()`. Our spec defines the API under the indices namespace as e.g. `indices.create`, hence there is no need to repeat the index suffix for each method as that is already defined by the namespace. Using the `index` suffix in each method was an oversight which must be corrected.
This is related to #27933. It introduces a jar named elasticsearch-core
in the lib directory. This commit moves the JarHell class from server to
elasticsearch-core. Additionally, PathUtils and some of Loggers are
moved as JarHell depends on them.
Several responses include the shards_acknowledged flag (indicating whether the
requisite number of shard copies started before the completion of the operation)
and there are two different getters used: isShardsAcknowledged() and isShardsAcked().
This PR deprecates the isShardsAcked() in favour of isShardsAcknowledged() in
CreateIndexResponse, RolloverResponse and CreateIndexClusterStateUpdateResponse.
Closes #27784
With this commit we upgrade the Gradle Shadow plugin that is used in our
benchmarks to version 2.0.2. This version does not use APIs that are
deprecated in Gradle 4.x.
* Adds task dependenciesInfo to BuildPlugin to generate a CSV file with dependencies information (name,version,url,license)
* Adds `ConcatFilesTask.groovy` to concatenate multiple files into one
* Adds task `:distribution:generateDependenciesReport` to concatenate `dependencies.csv` files into a single file (`es-dependencies.csv` by default)
# Examples:
$ gradle dependenciesInfo :distribution:generateDependenciesReport
## Use `csv` system property to customize the output file path
$ gradle dependenciesInfo :distribution:generateDependenciesReport -Dcsv=/tmp/elasticsearch-dependencies.csv
## When branch is not master, use `build.branch` system property to generate correct licenses URLs
$ gradle dependenciesInfo :distribution:generateDependenciesReport -Dbuild.branch=6.x -Dcsv=/tmp/elasticsearch-dependencies.csv
The last operation executed in IndicesClientDocumentationIT.testCreate()
is an asynchronous index creation. Because nothing waits for its
completion, on slow machines the index can sometimes be created after
the testCreate() test is finished, and it can fail the following test.
Closes #27754
This commit removes the usage of system properties for the HttpAsyncClient as this overrides some
defaults that we intentionally change. In order to set the default SSLContext to the system context
we set the SSLContext on the builder explicitly.
Closes #27827
This was already changed in 6.x as part of the backport of the recently added open and create index API. wait_for_active_shards can be a number but also "all", with this commit we verify that providing "all" works too.
Today we require users to prepare their indices for split operations.
Yet, we can do this automatically when an index is created which would
make the split feature a much more appealing option since it doesn't have
any 3rd party prerequisites anymore.
This change automatically sets the number of routing shards such that
an index is guaranteed to be able to split once into twice as many shards.
The number of routing shards is scaled towards the default shard limit per index
such that indices with a smaller number of shards can be split more often than
larger ones. For instance an index with 1 or 2 shards can be split 10x
(until it approaches 1024 shards) while an index created with 128 shards can only
be split 3x by a factor of 2. Please note this is just a default value and users
can still prepare their indices with `index.number_of_routing_shards` for custom
splitting.
NOTE: this change has an impact on the document distribution since we are changing
the hash space. Documents are still uniformly distributed across all shards but since
we are artificially changing the number of buckets in the consistent hashing space
documents might be hashed into different shards compared to previous versions.
This is a 7.0 only change.
This change removes the module named aggs-composite and adds the `composite` aggs
as a core aggregation. This allows other plugins to use this new aggregation
and simplifies the integration in the HL rest client.
Today Cross Cluster Search requires at least one node in each remote cluster to be up once the cross cluster search is run. Otherwise the whole search request fails even though some of the data (either local and/or remote) is available. This happens when performing the _search/shards calls to find out which remote shards the query has to be executed on. This scenario is different from shard failures that may happen later on when the query is actually executed, in case e.g. remote shards are missing, which is not going to fail the whole request but rather yield partial results, and the _shards section in the response will indicate that.
This commit introduces a boolean setting per cluster called search.remote.$cluster_alias.skip_if_disconnected, set to false by default, which allows to skip certain clusters if they are down when trying to reach them through a cross cluster search requests. By default all clusters are mandatory.
Scroll requests support such setting too when they are first initiated (first search request with scroll parameter), but subsequent scroll rounds (_search/scroll endpoint) will fail if some of the remote clusters went down meanwhile.
The search API response contains now a new _clusters section, similar to the _shards section, that gets returned whenever one or more clusters were disconnected and got skipped:
"_clusters" : {
"total" : 3,
"successful" : 2,
"skipped" : 1
}
Such section won't be part of the response if no clusters have been skipped.
The per cluster skip_unavailable setting value has also been added to the output of the remote/info API.
Standardize underscore requirements in parameters across different types of
requests:
_index, _type, _source, _id keep their underscores
params like version and retry_on_conflict will be without underscores
Throw an error if older versions of parameters are used
BulkRequest, MultiGetRequest, TermVectorsRequest, MoreLikeThisQuery
were changed
Closes #26886
* This change adds a module called `aggs-composite` that defines a new aggregation named `composite`.
The `composite` aggregation is a multi-buckets aggregation that creates composite buckets made of multiple sources.
The sources for each bucket can be defined as:
* A `terms` source, values are extracted from a field or a script.
* A `date_histogram` source, values are extracted from a date field and rounded to the provided interval.
This aggregation can be used to retrieve all buckets of a deeply nested aggregation by flattening the nested aggregation in composite buckets.
A composite bucket is composed of one value per source and is built for each document from the combination of values in the provided sources.
For instance the following aggregation:
````
"test_agg": {
"terms": {
"field": "field1"
},
"aggs": {
"nested_test_agg":
"terms": {
"field": "field2"
}
}
}
````
... which retrieves the top N terms for `field1` and for each top term in `field1` the top N terms for `field2`, can be replaced by a `composite` aggregation in order to retrieve **all** the combinations of `field1`, `field2` in the matching documents:
````
"composite_agg": {
"composite": {
"sources": [
{
"field1": {
"terms": {
"field": "field1"
}
}
},
{
"field2": {
"terms": {
"field": "field2"
}
}
}
]
}
}
````
The response of the aggregation looks like this:
````
"aggregations": {
"composite_agg": {
"buckets": [
{
"key": {
"field1": "alabama",
"field2": "almanach"
},
"doc_count": 100
},
{
"key": {
"field1": "alabama",
"field2": "calendar"
},
"doc_count": 1
},
{
"key": {
"field1": "arizona",
"field2": "calendar"
},
"doc_count": 1
}
]
}
}
````
By default this aggregation returns 10 buckets sorted in ascending order of the composite key.
Pagination can be achieved by providing `after` values, the values of the composite key to aggregate after.
For instance the following aggregation will aggregate all composite keys that sort after `arizona, calendar`:
````
"composite_agg": {
"composite": {
"after": {"field1": "alabama", "field2": "calendar"},
"size": 100,
"sources": [
{
"field1": {
"terms": {
"field": "field1"
}
}
},
{
"field2": {
"terms": {
"field": "field2"
}
}
}
]
}
}
````
This aggregation is optimized for indices that set an index sort matching the composite source definition.
For instance the aggregation above could run faster on indices that define an index sort like this:
````
"settings": {
"index.sort.field": ["field1", "field2"]
}
````
In this case the `composite` aggregation can early terminate on each segment.
This aggregation also accepts multi-valued fields but disables early termination for these fields even if index sorting matches the sources definition.
This is mandatory because index sorting picks only one value per document to perform the sort.
This is the first step to supporting WKT (and other future) format(s). The ShapeBuilders are quite messy and can be simplified by decoupling the parse logic from the build logic. This commit refactors the parsing logic into its own package separate from the Shape builders. It also decouples the GeoShapeType into a standalone enumerator that is responsible for validating the parsed data and providing the appropriate builder. This future-proofs the code making it easier to maintain and add new shape types.
While it's not possible to upgrade the Jackson dependencies
to their latest versions yet (see #27032 (comment) for more)
it's still possible to upgrade to the latest 2.8.x version.
The headers passed to reindex were skipped except for the last one. This
commit fixes the copying of the headers, as well as adds a base test
case for rest client builders to access the headers within the built
rest client.
relates #22976
Introduce a minimal thread scheduler as a base class for `ThreadPool`. Such a class can be used from the `BulkProcessor` to schedule retries and the flush task. This allows removing the `ThreadPool` dependency from `BulkProcessor`, which required providing settings containing `node.name` and also needed log4j for logging. Instead, it now needs a `Scheduler` that is much lighter and gets automatically created and shut down on close.
Closes#26028
Upgrade to Jackson 2.9.2 and also use a boolean `closed` flag to
indicate that a FastStringReader instance is closed, so that length
is still correctly reported after the reader is closed.
The shard preference _primary, _replica and its variants were useful
for the asynchronous replication. However, with the current impl, they
are no longer useful and should be removed.
Closes#26335
Request class is currently package protected, making it difficult for
the users to extend the RestHighLevelClient and to use its protected
methods to execute requests. This commit makes the Request class public
and changes a few methods of RestHighLevelClient to be protected.
This avoids messages with malformed URLs, like
"org.elasticsearch.client.ResponseException: PUT
http://127.0.0.1:9502customer: HTTP/1.1 400 Bad Request".
Relates #26564
It's easy to create a wrong Content-Type header when converting an
XContentType to an Apache HTTP ContentType instance.
This commit removes the direct usages of the ContentType.create() methods in favor of a Request.createContentType(XContentType) method that does the right thing.
Closes#26438
At current, we do not feel there is enough of a reason to shade the low
level rest client. It caused problems with commons logging and IDE's
during the brief time it was used. We did not know exactly how many
users will need this, and decided that leaving shading out until we
gather more information is best. Users can still shade the jar
themselves. For information and feedback, see issue #26366.
Closes#26328
This reverts commit 3a20922046.
This reverts commit 2c271f0f22.
This reverts commit 9d10dbea39.
This reverts commit e816ef89a2.
This allows plugins to plug rescore implementations into
Elasticsearch. While this is a fairly expert thing to do I've
done my best to point folks to the QueryRescorer as one that at
least documents the tradeoffs that it makes. I've attempted to
limit the API surface area by removing `SearchContext` from the
exposed interface, instead exposing just the IndexSearcher and
`QueryShardContext`. I also tried to make some of the class names
more consistent and do some general cleanup while I was there.
I entertained the notion of moving the `QueryRescorer` to a module.
After all, it'd be a wonderful test to prove that you can plug
rescore implementation into Elasticsearch if the only built in
rescore implementation is in the module. But I decided against it
because the new module would require a client jar and it'd require
moving some more things around. I think if we really want to do
it, we should do it as a followup.
I did, on the other hand, create an "example" rescore plugin which
should both be a nice example for anyone wanting to plug in their
own rescore implementation and serves as a good integration test
to make sure that you can indeed plug one in.
Closes#26208
The parser for the `ip_range` aggregation response is currently missing from the
NamedXContentRegistry in the high level rest client. Also changes the testing
around the expected number of parsers so we at least check that we register all
the parsers that we also test in InternalAggregationTestCase.
By making RestHighLevelClient Closeable, its close method will close the internal low-level REST client instance by default, which simplifies the way most users interact with the high-level client.
Its constructor now accepts a RestClientBuilder, which clarifies that the low-level REST client is internally created and managed.
It is still possible to provide an already built `RestClient` instance, but that can only be done by subclassing `RestHighLevelClient` and calling the protected constructor that accepts a `RestClient`. In such a case a consumer also has to be provided, which controls what has to be done when the high-level client gets closed.
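A minimal sketch of the resulting usage, assuming a locally running node; closing the high-level client also closes the internal low-level client:
```
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

try (RestHighLevelClient client = new RestHighLevelClient(
        RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
    // use the client; here we just check the cluster is reachable
    boolean pong = client.ping();
}
```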
Closes#26086
The client sniffer depends on the low-level REST client, while the Java high-level REST client and the transport client depend on Elasticsearch itself. Javadoc are not that useful unless they have links to the Elasticsearch classes in the latter case, and to the low-level REST client in the sniffer javadoc. This commit adds those links.
* A cycle was detected in eclipse, and was fixed in the same fashion as
core and core-tests.
* The rest client deps jar was not properly exported in the generated
eclipse classpath file for rest client.
Relates #25208
The low level rest client does not need the shadow plugin applied, it
only needs the plugin jar in the classpath, in order to create a
ShadowJar task.
Relates #25208
The configuration removed from the runtime configuration did not
properly remove the deps jar from gradle versions > 3.3. The rest client
now removes both the 3.3 and 3.3+ configurations so this works on both
versions of gradle.
Closes#25884
Relates #25208
This commit removes all external dependencies from the rest client jar
and shades them in an 'org.elasticsearch.client' package within the jar
using shadowJar gradle plugin. All projects that depended on the
existing jar have been converted to using the 'org.elasticsearch.client'
package prefixes to interact with the rest client.
Closes#25208
This commit calls the `useSystemProperties` method on the HttpAsyncClientBuilder so that the jvm
system properties are used. The primary reason for doing this is to ensure the builder uses the
system default SSLContext rather than the default instance created by the http client library.
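A hedged sketch of doing the same thing explicitly via the config callback, which is roughly what the builder now does by default:
```
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

RestClient client = RestClient.builder(new HttpHost("localhost", 9200))
    // honor jvm system properties, e.g. the system default SSLContext
    .setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder.useSystemProperties())
    .build();
```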
Closes#23231
We currently use fielddata on the `_id` field which is trappy, especially as we
do it implicitly. This changes the `random_score` function to use doc ids when
no seed is provided and to suggest a field when a seed is provided.
For now the change only emits a deprecation warning when no field is supplied
but this should be replaced by a strict check on 7.0.
Closes#25240
This adds a section about how to add aggregations to the SearchSourceBuilder and how
to retrieve them from a SearchResponse to the documentation for the high level rest client.
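A hedged sketch of the documented pattern; the index, field and aggregation names are illustrative and `client` is an already built RestHighLevelClient:
```
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.elasticsearch.search.builder.SearchSourceBuilder;

SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
// ask for the top terms of a hypothetical "company" field
sourceBuilder.aggregation(AggregationBuilders.terms("by_company").field("company"));
SearchRequest searchRequest = new SearchRequest("my_index").source(sourceBuilder);
SearchResponse searchResponse = client.search(searchRequest);
Terms byCompany = searchResponse.getAggregations().get("by_company");
```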
It was brought up that our current client artifacts have generic names like 'rest' that may cause conflicts with other artifacts.
This commit renames:
- rest -> elasticsearch-rest-client
- sniffer -> elasticsearch-rest-client-sniffer
- rest-high-level -> elasticsearch-rest-high-level-client
A couple of small changes are also preparing the high level client for its first release.
Closes#20248
Today if we search across a large number of shards we hit every shard. Yet, it's quite
common to search across an index pattern for time based indices where filtering excludes
all results outside a certain time range, i.e. `now-3d`. While the search can potentially hit
hundreds of shards, the majority of the shards might yield 0 results since there is no document
that is within this date range. Kibana for instance does this regularly but used `_field_stats`
to optimize the indices it needs to query. Now with the deprecation of `_field_stats` and its upcoming removal, a single dashboard in Kibana can potentially turn into searches hitting hundreds or thousands of shards, and that can easily cause search rejections even though most of the requests are very likely super cheap and only need a query rewrite to early terminate with 0 results.
This change adds a pre-filter phase for searches that can, if the number of shards is higher than the `pre_filter_shard_size` threshold (defaults to 128 shards), fan out to the shards
and check if the query can potentially match any documents at all. While false positives are possible, a negative response means that no matches are possible. These requests are not subject to rejection and can greatly reduce the number of shards a request needs to hit. The approach here is preferable to the Kibana approach with field stats since it correctly handles aliases and uses the correct threadpools to execute these requests. Further, it's completely transparent to the user and improves the scalability of Elasticsearch in general on large clusters.
Requests that execute a stored script will no longer be allowed to specify the lang of the script. This information is stored in the cluster state making only an id necessary to execute against. Putting a stored script will still require a lang.
This change collapses some of the packages for the bucket aggregations into their parent packages. This was done for the following aggregations:
* The variants of the range aggregation (geo_distance, date and ip) were moved into the `o.e.s.a.bucket.range` package
* The `o.e.s.a.bucket.terms.support` package was removed and the classes were moved to `o.e.s.a.bucket.terms`
* The filter aggregation was moved to `o.e.s.a.bucket.filter`
Since this PR is already relatively large with only the above changes subsequent PRs will do similar operations on relevant metric and pipeline aggregations
Relates to #22868
We previously grouped all the license and notice files for httpcore, httpcore-nio, httpclient and httpasyncclient under the same license and notice file. There were, though, subtle differences between those which we didn't keep track of. For instance the httpcore license file has changed slightly since 4.4, which we missed to track.
This commit goes back to having one license and notice file for each jar, to be completely sure that each dependency is associated with exactly the right license and notice file.
Closes#25567
Using the infra that we now have in place, we can convert the low-level REST client docs so that they extract code snippets from real Java classes. This way we make sure that all the snippets properly compile. Compared to the high level REST client docs, in this case we don't run the tests themselves, as that would require depending on test-framework which requires java 8 while the low-level REST client is compatible with java 7. I think that compiling snippets is enough for now.
This commit changes the parsing logic of DocWriteResponse, ReplicationResponse
and GetResult so that it skips any unknown additional fields (for forward compatibility
reasons). This affects the IndexResponse, UpdateResponse, DeleteResponse and
GetResponse objects.
Removes the `assemble` task from the `build` task when we have
removed `assemble` from the project. We removed `assemble` from
projects that aren't published so our releases will be faster. But
that broke CI because CI builds with `gradle precommit build` and,
it turns out, that `build` includes `check` and `assemble`. With
this change CI will only run `check` for projects without an
`assemble`.
Removes the `assemble` task from projects that are not published.
This should speed up `gradle assemble` by skipping projects that
don't need to be built. Which is useful because `gradle assemble`
is how we cut releases.
This commit adds a NamedXContentProvider interface that can
be implemented by plugins or modules using Java's SPI feature
in order to provide additional NamedXContent parsers to external
applications like the Java High Level Rest Client.
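A hedged sketch of what a provider could look like, assuming the SPI interface lives in the `org.elasticsearch.plugins.spi` package and is wired up through a `META-INF/services/org.elasticsearch.plugins.spi.NamedXContentProvider` file; `MyAggregationResult` and its `fromXContent` method are hypothetical:
```
import java.util.Collections;
import java.util.List;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.plugins.spi.NamedXContentProvider;
import org.elasticsearch.search.aggregations.Aggregation;

public class MyXContentProvider implements NamedXContentProvider {
    @Override
    public List<NamedXContentRegistry.Entry> getNamedXContentParsers() {
        // register a parser for a hypothetical "my_agg" aggregation response
        return Collections.singletonList(new NamedXContentRegistry.Entry(
            Aggregation.class, new ParseField("my_agg"),
            (parser, name) -> MyAggregationResult.fromXContent(parser, (String) name)));
    }
}
```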
REST handlers that require a body will throw an ElasticsearchParseException "request body required".
REST handlers that require a body OR source param will throw an ElasticsearchParseException "request body or source param required".
Replaced asserts in BulkRequest parsing code with a more descriptive IllegalArgumentException if the line contains an empty object.
Updated bulk REST test to verify an empty action line is rejected properly.
Updated BulkRequestTests with randomized testing for an empty action line.
Used try-with-resources for XContentParser in AbstractBulkByQueryRestHandler.
Some response classes in the java api expose both `getTook()` which returns a `TimeValue` and `getTookInMillis` which returns a `long` value. `getTook()` is enough as one can do `getTook().millis()` to obtain the same result as `getTookInMillis()`, which can be removed.
* Adds nodes usage API to monitor usages of actions
The nodes usage API has 2 main endpoints
/_nodes/usage and /_nodes/{nodeIds}/usage return the usage statistics
for all nodes and the specified node(s) respectively.
At the moment only one type of usage statistics is available, the REST
actions usage. This records the number of times each REST action class is
called and when the nodes usage api is called will return a map of rest
action class name to long representing the number of times each of the action
classes has been called.
Still to do:
* [x] Create usage service to store usage statistics
* [x] Record usage in REST layer
* [x] Add Transport Actions
* [x] Add REST Actions
* [x] Tests
* [x] Documentation
* Refactors UsageService so counts are done by the handlers
* Fixing up docs tests
* Adds a name to all rest actions
* Addresses review comments
Currently a `delete document` request against a non-existing index actually **creates** this index.
With this change the `delete document` no longer creates the previously non-existing index and throws an `index_not_found` exception instead.
However as discussed in https://github.com/elastic/elasticsearch/pull/15451#issuecomment-165772026, if an external version is explicitly used, the current behavior is preserved and the index is still created and the document is marked for deletion.
Fixes#15425
This PR revolves around places in the code where introducing a StringBuilder might make the construction
of a String easier to follow, and might also avoid cases where the compiler's conservative substitution of
StringBuilder for String concatenation is not optimal for performance.
* Add parent-join module
This change adds a new module named `parent-join`.
The goal of this module is to provide a replacement for the `_parent` field but as a first step this change only moves the `has_child`, `has_parent` queries and the `children` aggregation to this module.
These queries and aggregations are no longer in core but they are deployed by default as a module.
Relates #20257
We've had `QueryDSLDocumentationTests` for a while but it had a very
hopeful comment at the top about how we want to make sure that the
examples in the query-dsl docs match up with the tests but we never
had anything that made *sure* that they did. This changes that!
Now the examples from the query-dsl docs are all built from the
`QueryDSLDocumentationTests`. All except for the percolator example
because that is hard to do as it stands now.
To make this easier this change moves `QueryDSLDocumentationTests`
from core and into the high level rest client. This is useful for
two reasons:
1. We expect the high level rest client to be able to use the builders.
2. The code that builds that docs doesn't check out all of
Elasticsearch. It only checks out certain directories. Since we're
already including snippets from that directory we don't have to
make any changes to that process.
Closes#24320
The one argument ctor for `Script` creates a script with the
default language but most usages are for testing and either
don't care about the language or are for use with
`MockScriptEngine`. This replaces most usages of the one argument
ctor on `Script` with calls to `ESTestCase#mockScript` to make
it clear that the tests don't need the default scripting language.
I've also factored out some copy and pasted script generation
code into a single place. I would have had to change that code
to use `mockScript` anyway, so it was easier to perform the
refactor.
Relates to #16314
When indexing a document via the bulk API where IDs can be explicitly
specified, we currently accept an empty ID. This is problematic because
such a document can not be obtained via the get API. Instead, we should
reject these requests as accepting them could be a dangerous form of
leniency. Additionally, we already have a way of specifying
auto-generated IDs and that is to not explicitly specify an ID so we do
not need a second way. This commit rejects the individual requests where
ID is specified but empty.
Relates #24118
The buffer limit should have been configurable already, but the factory constructor is package private so it is truly configurable only from the org.elasticsearch.client package. Also the HttpAsyncResponseConsumerFactory interface was package private, so it could only be implemented from the org.elasticsearch.client package.
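With the factory and its heap-buffered implementation now public, the limit can be raised per request; a hedged sketch with an illustrative 200MB limit, where `client` is a low-level RestClient:
```
import java.util.Collections;
import org.elasticsearch.client.HttpAsyncResponseConsumerFactory;
import org.elasticsearch.client.Response;

// buffer responses of up to 200MB in heap for this request
HttpAsyncResponseConsumerFactory consumerFactory =
    new HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory(200 * 1024 * 1024);
Response response = client.performRequest(
    "GET", "/_search", Collections.emptyMap(), null, consumerFactory);
```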
Closes#23958
This commit modifies the BulkProcessor to be decoupled from the
client implementation. Instead it just takes a
BiConsumer<BulkRequest, ActionListener<BulkResponse>> that executes
the BulkRequest.
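A minimal sketch of the decoupled setup against the high level client; `highLevelClient` is assumed to be already built and the listener bodies are left empty for brevity:
```
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;

BulkProcessor bulkProcessor = BulkProcessor.builder(
        // the BiConsumer that actually executes the BulkRequest
        (request, bulkListener) -> highLevelClient.bulkAsync(request, bulkListener),
        new BulkProcessor.Listener() {
            @Override
            public void beforeBulk(long executionId, BulkRequest request) {}
            @Override
            public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {}
            @Override
            public void afterBulk(long executionId, BulkRequest request, Throwable failure) {}
        })
    .setBulkActions(500) // flush every 500 requests; illustrative value
    .build();
```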
This commit renames the random ASCII helper methods in ESTestCase. This
is because this method ultimately uses the random ASCII methods from
randomized runner, but these methods actually only produce random
strings generated from [a-zA-Z].
Relates #23886
Today the status is lost when parsing back a BulkItemResponse.Failure. This commit changes the BulkItemResponse.Failure parsing method so that it correctly instantiates a failure with the parsed status instead of relying on the parsed ElasticsearchException (which always returns an internal server error status).
The current implementation of RestClient.performAsync() methods can throw exceptions before the request is asynchronously executed. Since it only throws unchecked exceptions, it's easy for the user/dev to forget to catch them. Instead I think async methods should never throw exceptions and should always call the listener onFailure() method.
This commit adds support for an info() method to the High Level Rest
client that returns the cluster information usually obtained by performing a
`GET hostname:9200` request.
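A hedged sketch of the new call, where `client` is an already built RestHighLevelClient:
```
import org.elasticsearch.action.main.MainResponse;

// equivalent to `GET hostname:9200`; exposes cluster name, node name and version
MainResponse info = client.info();
```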
NamedXContentRegistry will be used by the high level REST client to parse aggregation responses, and any section whose class type depends on a json key. There are currently no entries in the standard registry, but soon aggs and most likely suggesters will be added. Also it is possible for subclasses to provide additional named xcontent parsers.
In #23253 we added the ability to incrementally reduce search results.
This change exposes the parameter to control the batch size and therefore
the memory consumption of a large search request.
This commit enforces the requirement of Content-Type for the REST layer and removes the deprecated methods in transport
requests and their usages.
While doing this, it turns out that there are many places where *Entity classes are used from the apache http client
libraries and many of these usages did not specify the content type. The methods that do not specify a content type
explicitly have been added to forbidden apis to prevent more of these from entering our code base.
Relates #19388
A previous change aligned the handling of the GET document and HEAD
document APIs. This commit aligns the specification for these two APIs
as well, and fixes a failing test.
Relates #23196
We have a bunch of interfaces that have only a single implementation
for 6 years now. These interfaces are pretty useless from a SW development
perspective and only add unnecessary abstractions. They also require
lots of casting in many places where we expect that there is only one
concrete implementation. This change removes the interfaces, makes
all of the classes final and removes the duplicate `foo` `getFoo` accessors
in favor of `getFoo` from these classes.
This commit adds support for get and exists api to the high level Java Rest Client. It also adds the infrastructure for other methods to be added in the future.
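A minimal sketch of the two new calls; index, type and id are illustrative:
```
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;

GetRequest getRequest = new GetRequest("my_index", "my_type", "1");
GetResponse getResponse = client.get(getRequest);   // fetch the document
boolean exists = client.exists(getRequest);         // or just check its existence
```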
This is related to #22116. Core no longer needs `SocketPermission`
`connect`.
This permission is relegated to these modules/plugins:
- transport-netty4 module
- reindex module
- repository-url module
- discovery-azure-classic plugin
- discovery-ec2 plugin
- discovery-gce plugin
- repository-azure plugin
- repository-gcs plugin
- repository-hdfs plugin
- repository-s3 plugin
And for tests:
- mocksocket jar
- rest client
- httpcore-nio jar
- httpasyncclient jar
This commit upgrades the checkstyle configuration from version 5.9 to
version 7.5, the latest version as of today. The main enhancement
obtained via this upgrade is better detection of redundant modifiers.
Relates #22960
This change adds a strict mode for xcontent parsing on the rest layer. The strict mode will be off by default for 5.x and in a separate commit will be enabled by default for 6.0. The strict mode, which can be enabled by setting `http.content_type.required: true` in 5.x, will require that all incoming rest requests have a valid and supported content type header before the request is dispatched. In the non-strict mode, the Content-Type header will be inspected and if it is not present or not valid, we will continue with auto detection of content like we have done previously.
The content type header is parsed to the matching XContentType value with the only exception being for plain text requests. This value is then passed on with the content bytes so that we can reduce the number of places where we need to auto-detect the content type.
As part of this, many transport requests and builders were updated to provide methods that
accepted the XContentType along with the bytes and the methods that would rely on auto-detection have been deprecated.
In the non-strict mode, deprecation warnings are issued whenever a request with body doesn't provide the Content-Type header.
See #19388
This adds the necessary `AuthCache` needed to support preemptive authorization. By adding every host to the cache, the automatically added `RequestAuthCache` interceptor will add credentials on the first pass rather than waiting to do it after _each_ anonymous request is rejected (thus always sending everything twice when basic auth is required).
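A hedged sketch of the usual credentials setup that now benefits from the cache; with preemptive authorization the illustrative credentials below are sent on the very first request rather than after a 401 challenge:
```
import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.elasticsearch.client.RestClient;

CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
credentialsProvider.setCredentials(AuthScope.ANY,
    new UsernamePasswordCredentials("user", "password")); // illustrative credentials
RestClient client = RestClient.builder(new HttpHost("localhost", 9200))
    .setHttpClientConfigCallback(
        b -> b.setDefaultCredentialsProvider(credentialsProvider))
    .build();
```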
There are presently 7 ctor args used in any rest handlers:
* `Settings`: Every handler uses it to initialize a logger and
some other strange things.
* `RestController`: Every handler registers itself with it.
* `ClusterSettings`: Used by `RestClusterGetSettingsAction` to
render the default values for cluster settings.
* `IndexScopedSettings`: Used by `RestGetSettingsAction` to get
the default values for index settings.
* `SettingsFilter`: Used by a few handlers to filter returned
settings so we don't expose stuff like passwords.
* `IndexNameExpressionResolver`: Used by `_cat/indices` to
filter the list of indices.
* `Supplier<DiscoveryNodes>`: Used to enrich the response
by handlers that list tasks.
We probably want to reduce these arguments over time but
switching construction away from guice gives us tighter
control over the list of available arguments.
These parameters are passed to plugins using
`ActionPlugin#initRestHandlers` which is expected to build and
return that handlers immediately. This felt simpler than
returning an reference to the ctors given all the different
possible args.
Breaks java plugins by moving rest handlers off of guice.
All the language clients support a special ignore parameter that doesn't get passed to elasticsearch with the request, but is used to indicate which error code should not lead to an exception if returned for a specific request.
Moving this to the low level REST client will allow the high level REST client to make use of it too, for instance so that it doesn't have to intercept ResponseExceptions when the get api returns a 404.
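A hedged sketch of how the parameter could be used with the low level client once supported; the endpoint is illustrative:
```
import java.util.Collections;
import java.util.Map;
import org.elasticsearch.client.Response;

// don't turn a 404 into a ResponseException for this request
Map<String, String> params = Collections.singletonMap("ignore", "404");
Response response = client.performRequest("GET", "/my_index/my_type/missing-id", params);
// inspect response.getStatusLine() instead of catching an exception
```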
This is related to #22116. A number of modules (reindex, etc) use the
rest client. The rest client opens connections using the apache http
client. To avoid throwing SecurityException when using the
SecurityManager these operations must be privileged. This is tricky
because connections are opened within the httpclient code on its
reactor thread. The way I confronted this was to wrap the creation
of the client (and creation of reactor thread) in a doPrivileged
block. The new thread inherits the existing security context.
The RestHighLevelClient class takes as an argument a low level client instance of RestClient. The first method added is ping, which returns true if the call to HEAD / went ok and false if an IOException was thrown. Any other exception gets bubbled up.
There are two kinds of tests, a unit test (RestHighLevelClientTests) that verifies the interaction between high level and low level client, and an integration test (MainActionIT) which relies on an externally started es cluster to send requests to.
This integrates the mocksocket jar with elasticsearch tests. Mocksocket wraps actions requiring SocketPermissions in doPrivilege blocks. This will eventually allow SocketPermissions to be assigned to the mocksocket jar opposed to the entire elasticsearch codebase.
Today we ship with default jvm.options for server Elasticsearch that
prevents Netty from using some unsafe optimizations. Yet, the settings
do nothing for the transport client since it is embedded in other
applications that will not read and use those settings. This commit adds
these settings for the transport client, and is done so in a way that
still enables users to go unsafe if they want to go unsafe (they
shouldn't, but the option is there).
Relates #22284
Not only was StringJoiner unused, it's also a class only available in java 1.8, which is a problem given that the REST client has minimum java required set to 1.7
The warnings get printed out in a single line e.g. WARNING: request [DELETE http://localhost:9200/index/type/_api] returned 3 warnings:[this is warning number 0],[this is warning number 1],[this is warning number 2]
If you try to close the rest client inside one of its callbacks then
it blocks itself. The thread pool switches the status to one that
requests a shutdown and then waits for the pool to shutdown. When
another thread attempts to honor the shutdown request it waits
for all the threads in the pool to finish what they are working on.
Thus thread a is waiting on thread b while thread b is waiting
on thread a. It isn't quite that simple, but it is close.
Relates to #22027
Changes the default socket and connection timeouts for the rest
client from 10 seconds to the more generous 30 seconds.
Defaults reindex-from-remote to those timeouts and makes the
timeouts configurable like so:
```
POST _reindex
{
"source": {
"remote": {
"host": "http://otherhost:9200",
"socket_timeout": "1m",
"connect_timeout": "10s"
},
"index": "source",
"query": {
"match": {
"test": "data"
}
}
},
"dest": {
"index": "dest"
}
}
```
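On the rest client side, the same defaults can be overridden through the builder; a minimal sketch with illustrative values:
```
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

RestClient client = RestClient.builder(new HttpHost("localhost", 9200))
    .setRequestConfigCallback(requestConfigBuilder -> requestConfigBuilder
        .setConnectTimeout(5000)    // milliseconds
        .setSocketTimeout(30000))   // the new, more generous default
    .build();
```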
Closes#21707
Today there is no way to get notified if a node is disconnected. Client code
must poll the TransportClient constantly to detect that a node is not connected
anymore in order to react and add new nodes or notify alerting systems etc. For instance
if a hostname gets resolved to an IP but that host is disconnected clients want
to reconnect by resolving the hostname again which is a common situation in cloud
environments.
Closes#21424
We kept `netty_3` as a fallback in the 5.x series but now that master
is 6.0 we don't need this; in other words, all issues coming up with
netty 4 will be blockers for 6.0.
* Rest client: don't reuse that same HttpAsyncResponseConsumer across multiple retries
Turns out that AbstractAsyncResponseConsumer from apache async http client is stateful and cannot be reused across multiple requests. The failover mechanism was mistakenly reusing that same instance, which can be provided by users, across retries in case nodes are down or return 5xx errors. The downside is that we have to change the signature of two public methods, as HttpAsyncResponseConsumer cannot be provided directly anymore, rather its factory needs to be provided which is going to be used to create one instance of the consumer per request attempt.
Up until now we tested our RestClient against multiple nodes only in a mock environment, where we don't really send http requests. In that scenario we can verify that retries etc. work properly but the interaction with the http client library in a real scenario is different and can catch other problems. With this commit we also add an integration test that sends requests to multiple hosts, and some of them may also get stopped meanwhile. The specific test for pathPrefix was also removed as pathPrefix is now randomly applied by default, hence implicitly tested. Moved also a small test method that checked the validity of the path argument to the unit test RestClientSingleHostTests.
Also increase default buffer limit to 100MB and make it required in default consumer
The default buffer limit used to be 10MB but that proved not to be high enough for scroll requests (see reindex from remote). With this commit we increase the limit to 100MB and make it a bit more visible in the consumer factory.
It was 10MB and that was causing trouble when folks reindexed from remote
with large documents.
We also improve the error reporting so it tells folks to use a smaller
batch size if they hit a buffer size exception. Finally, adds some docs
to reindex-from-remote mentioning the buffer and giving an example of
lowering the size.
Closes#21185
Lucene 6.3 is expected to be released in the next weeks so it'd be good to give
it some integration testing. I had to upgrade randomized-testing too so that
both Lucene and Elasticsearch are on the same version.
This change proposes the removal of all non-TCP transport implementations. The
mock transport can be used by default to run tests instead of the local transport, as it has
roughly the same performance as TCP, or is at least not noticeably slower.
This is a master only change, deprecation notice in 5.x will be committed as a
separate change.
Today when parsing a request, Elasticsearch silently ignores incorrect
(including parameters with typos) or unused parameters. This is bad as
it leads to requests having unintended behavior (e.g., if a user hits
the _analyze API and misspells the "tokenizer" parameter then Elasticsearch will
just use the standard analyzer, completely against intentions).
This commit removes lenient URL parameter parsing. The strategy is
simple: when a request is handled and a parameter is touched, we mark it
as such. Before the request is actually executed, we check to ensure
that all parameters have been consumed. If there are remaining
parameters yet to be consumed, we fail the request with a list of the
unconsumed parameters. An exception has to be made for parameters that
format the response (as opposed to controlling the request); for this
case, handlers are able to provide a list of parameters that should be
excluded from tripping the unconsumed parameters check because those
parameters will be used in formatting the response.
Additionally, some inconsistencies between the parameters in the code
and in the docs are corrected.
Relates #20722
* Build: Remove old maven deploy support
This change removes the old maven deploy that we have in parallel to
maven-publish, and makes maven-publish fully work with publishing to
maven local. Using `gradle publishToMavenLocal` should be used to
publish to .m2.
Note that there is an unfortunate hack that means for
zip artifacts we must first create/publish a dummy pom file, and then
follow that with the real pom file. It would be nice to have the pom
file contains packaging=zip, but maven central then requires sources and
javadocs. But our zips are really just attached artifacts, so we already
set the packaging type to pom for our zip files. This change just works
around a limitation of the underlying maven publishing library which
silently skips attached artifacts when the packaging type is set to pom.
relates #20164, closes#20375
* Remove unnecessary extra spacing
This change replaces the fields parameter with stored_fields when it makes sense.
This is dictated by the renaming we made in #18943 for the search API.
The following list of endpoint has been changed to use `stored_fields` instead of `fields`:
* get
* mget
* explain
The documentation and the rest API spec has been updated to cope with the changes for the following APIs:
* delete_by_query
* get
* mget
* explain
The `fields` parameter has been deprecated for the following APIs (it is replaced by _source filtering):
* update: the fields are extracted from the _source directly.
* bulk: the fields parameter is used but fields are extracted from the source directly so it is allowed to have non-stored fields.
Some APIs still have the `fields` parameter for various reasons:
* cat.fielddata: the fields parameter relates to the fielddata fields that should be printed.
* indices.clear_cache: used to indicate which fielddata fields should be cleared.
* indices.get_field_mapping: used to filter fields in the mapping.
* indices.stats: get stats on fields (stored or not stored).
* termvectors: fields are retrieved from the stored fields if possible and extracted from the _source otherwise.
* mtermvectors:
* nodes.stats: the fields parameter is used to concatenate completion_fields and fielddata_fields so it's not related to stored_fields at all.
Fixes#20155
* master:
Increase visibility of deprecation logger
Skip transport client plugin installed on JDK 9
Explicitly disable Netty key set replacement
percolator: Fail indexing percolator queries containing either a has_child or has_parent query.
Make it possible for Ingest Processors to access AnalysisRegistry
Allow RestClient to send array-based headers
Silence rest util tests until the bogusness can be simplified
Remove unknown HttpContext-based test as it fails unpredictably on different JVMs
Tests: Improve rest suite names and generated test names for docs tests
Add support for a RestClient base path
This commit adds an assumption to
PreBuiltTransportClientTests#testPluginInstalled on JDK 9. The
underlying issue is that Netty attempts to access sun.nio.ch but this
package is not exported from java.base on JDK 9. This throws an uncaught
InaccessibleObjectException causing the test to fail. This assumption
can be removed when Netty 4.1.6 is released as it will include a fix for
this scenario.
Relates #20251
This enables the RestClient to send array-based (multi-valued) header values, rather than only sending whatever happened to be the _last_ value of the header.
This enables simple support for proxies (beyond proxy host and proxy port, which is done via the RequestConfig) to provide a base path in front of all requests performed by the RestClient.
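A minimal sketch; the prefix is illustrative:
```
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

RestClient client = RestClient.builder(new HttpHost("localhost", 9200))
    .setPathPrefix("/elasticsearch") // prepended to every request path
    .build();
// a subsequent performRequest("GET", "/_cluster/health", ...) hits
// http://localhost:9200/elasticsearch/_cluster/health
```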
This removes final from the RestClient, Response, and Sniffer classes so that outside code can mock them. Their constructors are already package private, so there's not much that can go wrong.
Today when we load the Netty plugins, we indirectly cause several Netty
classes to initialize. This is because we attempt to load some classes
by name, and loading these classes is done in a way that triggers a long
chain of class initializers within Netty. We should not do this, this
can lead to log messages before the logger is loaded, and it leads to
initialization in cases when the classes would never be needed (for
example, Netty 3 class initialization is never needed if Netty 4 is
used, and vice versa). This commit avoids this early initialization of
these classes by removing the need for the early loading.
Relates #19819
This commit updates Jackson to the 2.8.1 version, which is more strict when it comes to building objects. It also adds the snakeyaml dependency that was previously shaded in jackson libs.
It also closes#18076
When closing a transport client that depends on Netty 4, interrupted
exceptions can be thrown while shutting down some Netty threads. This
commit refactors the handling of these exceptions to finish shutting
down and then just restore the interrupted status.
Today if the PreBuiltTransportClient is using Netty 4 transport, on
shutdown some Netty 4 threads could linger. This commit causes the
client to wait for these threads to shutdown upon termination.
This change does three things:
1. Makes PreBuiltTransportClientTests run since it was silently
failing on a missing dependency
2. Makes PreBuiltTransportClientTests pass
3. Removes the http.type and transport.type from being set in the
transport clients additional settings since these are set to `netty4` by
default anyway.
* Allow to run client benchmark as an uberjar
* Busy wait to avoid accidental skew on low target throughput rates
* Trigger and wait for full GC to happen between trials
* Add missing SuppressForbidden to allow System.gc in client benchmark
Consuming the response body to make it part of the exception message means that it may not be readable anymore later, depending on whether the entity is repeatable or not. Turns out that the response body tells a lot about the error itself, and considering that we don't expect bodies to be incredibly big for errors, we can wrap the entity into a BufferedHttpEntity to make it repeatable.
Closes#19622
Simplify Sniffer initialization and automatically create the default HostsSniffer
Take Sniffer.Builder out to its own top level class. Remove HostsSniffer.Builder and let SnifferBuilder create the default HostsSniffer. This simplifies the Sniffer initialization as the HostsSniffer is not mandatory anymore. It can still be specified though in case the configuration needs to be changed or a different impl has to be used. Also make HostsSniffer an interface.
It can happen that the list of healthy hosts is empty, then we get one from the blacklist. but some other operation might have sneaked in and emptied the blacklist in the meantime, so we have to retry till we manage to get some host, either from the healthy list or from the blacklist.
Throw explicit IllegalStateException in unexpected situations, like where both response and exception are set, or when both are unset. Add unit test for SyncResponseListener.
We throw IOException, which is the exception that is going to be thrown in 99% of the cases. A more generic exception can happen, and if it is a runtime one we just let it bubble up as is, otherwise we wrap it into a runtime one so that we don't require catching Exception everywhere, which seems odd.
Also adjusted javadocs for all performRequest methods
We keep the default async client behaviour like in BasicAsyncResponseConsumer, but we lower the maximum size of the buffer from Integer.MAX_VALUE (2GB) to 10MB. This way users will realize they are buffering big responses in heap, hence they'll know they have to do something about it: either write their own response consumer or increase the buffer size limit by providing their manually created instance of HeapBufferedAsyncResponseConsumer (the constructor accepts a bufferLimit int argument).
Also delayed call to HttpAsyncClient#start so that if something goes wrong while creating the RestClient, the http client threads don't linger. In fact, if the constructor fails it is not possible to call close against the RestClient.
HttpClientConfigCallback#customizeHttpClient now also returns the HttpClientBuilder so it can be completely replaced
RequestConfigCallback#customizeRequestConfig now also returns the RequestConfig.Builder so it can be completely replaced
The new method accepts the usual parameters (method, endpoint, params, entity and headers) plus a response listener and an async response consumer. Shortcut methods are also added that don't require params, entity or the async response consumer, since those are optional.
There are a few relevant api changes as a consequence of the move to async client that affect sync methods:
- Response doesn't implement Closeable anymore, responses don't need to be closed
- performRequest throws Exception rather than just IOException, as that is the exception that we get from the FutureCallback#failed method in the async http client
- ssl configuration is a bit simpler, one only needs to call setSSLStrategy from a custom HttpClientConfigCallback, that doesn't end up overridng any other default around connection pooling (it used to happen with the sync client and make ssl configuration more complex)
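A hedged sketch of the new async method in its simplest shortcut form, where `client` is a RestClient:
```
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseListener;

client.performRequestAsync("GET", "/", new ResponseListener() {
    @Override
    public void onSuccess(Response response) {
        // handle the response; invoked on the http client's callback thread
    }
    @Override
    public void onFailure(Exception exception) {
        // communication problems and error status codes end up here
    }
});
```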
Relates to #19055
The `client/transport` project adds a new jar build project that
pulls in all dependencies and configures all required modules.
Preinstalled modules are:
* transport-netty
* lang-mustache
* reindex
* percolator
The `TransportClient` classes are still in core
while `TransportClient.Builder` has only a protected constructor
such that users are redirected to use the new `TransportClientBuilder`
from the new jar.
Closes#19412
The callback replaces the ability to fully replace the http client instance. By doing that, one used to lose any defaults that the RestClient had set for the underlying http client. Given that you'd usually override one or two things only, like a couple of timeout values, the ssl factory or the default credentials provider, it is not user friendly if by doing that users end up replacing the whole http client instance and losing any defaults set by us.
Users wanting to send a request by providing only its method and endpoint, effectively the only two required arguments, shouldn't need to pass in an empty map and a null entity for the body. While at it we can also add a variant to send requests by specifying only method, endpoint and params, but not body. Headers remain a vararg as last argument, so they can always optionally be provided.
Closes#19312
The assumption in HostsSniffer is that all of the arguments have been properly provided and validated through HostsSniffer.Builder, except they weren't, as the scheme didn't have a default value and when not set would cause NPEs down the road. Improved tests to catch this too.
Some Rest tests use the Sun HTTP server which has lingering threads after shutdown. Similar to ESTestCase, this adds an
option to wait 5 seconds for these threads to terminate.
:client ---------> :client:rest
:client-sniffer -> :client:sniffer
:client-test ----> :client:test
This lines the client up with how we do things like modules and
plugins.
The lucene-test dependency caused issues with IDEs as they would always load the lucene 5 jar although they shouldn't have, which caused jarhell in es core tests.
If we depend directly on randomized runner we don't have this problem. It is luckily still compatible with java 1.7. This does require, though, adding a thin module that includes the base test class which can be shared between client and client-sniffer.
Projects that don't depend on elasticsearch-test fail otherwise because org.elasticsearch.test.EsIntegTestCase (the default integ test class) is not in the classpath. They should provide their own integ test base class, but having integration tests should not be mandatory. One can simply set skipIntegTestsInDisguise to true to prevent loading of the integ test class.
Unit tests rely on mockito to mock the internal HttpClient instance. No http request is performed, we only simulate interaction between RestClient and its internal HttpClient.
Store default headers ourselves instead, otherwise default ones cannot be replaced. Don't allow for multiple headers with same key, last one wins and replaces previous ones with same key.
Also fail with null params or headers.
Although elasticsearch doesn't support these methods (RestController doesn't even allow registering handlers for them), the RestClient should allow sending requests using them.
Use sun HttpServer instead and disable forbidden-apis for test classes. It turns out to be more flexible than okhttp as it allows get & delete with body.
Create a new subproject called client-sniffer that contains the o.e.client.sniff package. Since it is going to go to a separate jar, due to its additional functionalities and dependency on jackson, it makes sense to have it as a separate project that depends on client. This way we make sure that client doesn't depend on it etc.
ElasticsearchResponseException, as well as ElasticsearchResponse, should only be created from o.e.client package.
RequestLogger should only be used from this package too.
Instead of having a Connection mutable object that holds the state of the connection to each host, we now have immutable objects only. We keep two sets, one with all the hosts, one with the blacklisted ones. Once we blacklist a host we associate it with a DeadHostState which keeps track of the number of failed attempts and when the host should be retried. A new state object is created each and every time the state of the host needs to be updated.
We still have a wrapper called RestTestClient that is very specific to Rest tests, as well as RestTestResponse etc. but all the low level bits around http connections etc. are now handled by RestClient.
The only small problem is that the response gets closed straightaway and its body read immediately into a string. Should be ok to load it all into memory eagerly though in case of errors. Otherwise it becomes cumbersome to have an exception implement Closeable...
The connection class can be greatly simplified now that we don't ping anymore. Pings required a special initial state (UNKNOWN) for connections, to indicate that they require pinging although they are not dead. At this point we don't need the State enum anymore, as connections can only be dead or alive based on the number of failed attempts. markResurrected is also not needed, as it was again a way to make pings required. RestClient can simply pick a dead connection now and use it, no need to change its state when picking the connection.
Given that we don't use streams anymore, we can check straightaway if the connection iterator is empty before returning it and resurrect a connection when needed directly in the connection pool, no lastResortConnection method required.
There are two implementations of connection pool, a static one that allows to enable/disable pings, and a sniffing one that sniffs nodes from the nodes info api.
Transport retrieves a stream of connections from the connection pool for each request and calls onSuccess or onFailure depending on the result of the request.
Transport also supports a max retry timeout to control the timeout for the request retries overall.