This change makes the request builder code path the same as `Client#execute`. The request builder used to return a `ListenableActionFuture` when calling execute, which allowed associating listeners with the returned future. For async execution, though, it is recommended to use the `execute` method that accepts an `ActionListener`, as users would do when calling `Client#execute`.
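For illustration, a minimal sketch of the recommended async pattern (the index name is made up; assumes an existing `Client` instance):
```java
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;

class AsyncSearchExample {
    // Pass an ActionListener to execute() instead of blocking on a returned future.
    static void searchAsync(Client client) {
        client.prepareSearch("my-index")
            .execute(new ActionListener<SearchResponse>() {
                @Override
                public void onResponse(SearchResponse response) {
                    // handle the response
                }

                @Override
                public void onFailure(Exception e) {
                    // handle the failure
                }
            });
    }
}
```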
Relates to #24412
Relates to #9201
Async shard fetching only uses the node id to correlate responses to requests. This can lead to a situation where a response from an earlier request is mistaken for a response to a new request when a node is restarted. This commit adds unique round ids to correlate responses to requests.
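A hypothetical sketch of the idea (none of these class or method names come from the actual change):
```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Key in-flight fetches by a unique round id so that a stale response from a
// restarted node cannot be matched against a newer request.
class RoundCorrelation {
    private final AtomicLong rounds = new AtomicLong();
    private final Map<Long, String> inFlight = new ConcurrentHashMap<>(); // round id -> node id

    long newRound(String nodeId) {
        long round = rounds.incrementAndGet();
        inFlight.put(round, nodeId);
        return round;
    }

    // A response is only accepted if it carries the round id of a request that
    // is still in flight; responses from earlier rounds are simply dropped.
    boolean accept(long round, String nodeId) {
        return nodeId.equals(inFlight.remove(round));
    }
}
```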
We often want the JVM arguments used for a running instance of
Elasticsearch. It sure would be nice if these came as part of the nodes
API, or any API that includes JVM info. This commit causes these
arguments to be displayed.
Relates #24450
These can be very useful to have, so let's have them at our fingertips in
the logs (in case the JVM dies and we can no longer retrieve them, at
least they are here).
Relates #24451
* Fix JVM test argline
The argline was being overridden by '-XX:-OmitStackTraceInFastThrow',
which led to failures in tests that expected the JVM to be in a certain
state based on the value of tests.jvm.argline; those arguments were
never actually passed to the JVM. Additionally, we need to respect a
provided JVM argline that already contains a flag for
OmitStackTraceInFastThrow. This commit fixes this by only setting
OmitStackTraceInFastThrow if it is not already set (see the sketch after
this list).
* Add comment
* Fix comment
* More elaborate comment
* Sigh
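A rough sketch of that conditional check, assuming the argline is assembled as a plain string (the real logic lives in the Gradle build scripts; the class and method names here are made up):
```java
class ArglineFlags {
    // Respect an explicit choice in tests.jvm.argline; only append our default
    // -XX:-OmitStackTraceInFastThrow when the flag is not already present.
    static String withStackTraceFlag(String argline) {
        if (argline.contains("OmitStackTraceInFastThrow")) {
            return argline;
        }
        return argline + " -XX:-OmitStackTraceInFastThrow";
    }
}
```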
We've had `QueryDSLDocumentationTests` for a while, but it had a very
hopeful comment at the top about how we want to make sure that the
examples in the query-dsl docs match up with the tests, yet we never
had anything that made *sure* that they did. This changes that!
Now the examples from the query-dsl docs are all built from the
`QueryDSLDocumentationTests`, all except for the percolator example
because that is hard to do as it stands now.
To make this easier, this change moves `QueryDSLDocumentationTests`
from core into the high level rest client. This is useful for
two reasons:
1. We expect the high level rest client to be able to use the builders.
2. The code that builds the docs doesn't check out all of
Elasticsearch. It only checks out certain directories. Since we're
already including snippets from that directory, we don't have to
make any changes to that process.
Closes #24320
TransportService and RemoteClusterService are already closely coupled today,
and to simplify remote cluster integration down the road, RemoteClusterService
can be a direct dependency of TransportService. This change moves
RemoteClusterService into TransportService with the goal of making it a hidden
implementation detail of TransportService in follow-up changes.
The license header in this file was after and on the same line as the
package statement. This commit moves the package statement to be after
the license header.
This adds `-XX:-OmitStackTraceInFastThrow` to the JVM arguments,
which *should* prevent the JVM from omitting stack traces at
common exception sites. Even though these sites are common, we'd
still like to have the stack traces in order to debug the exceptions.
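As a standalone illustration of the JVM behavior this flag disables (not Elasticsearch code), a hot throw site can eventually produce exceptions without stack traces unless `-XX:-OmitStackTraceInFastThrow` is set:
```java
// Run with and without -XX:-OmitStackTraceInFastThrow to see the difference:
// after enough iterations the JIT may recompile the throw site and reuse a
// preallocated exception with an empty stack trace.
public class FastThrowDemo {
    public static void main(String[] args) {
        Object o = null;
        for (int i = 0; i < 1_000_000; i++) {
            try {
                o.hashCode(); // always throws NullPointerException
            } catch (NullPointerException e) {
                if (e.getStackTrace().length == 0) {
                    System.out.println("stack trace omitted at iteration " + i);
                    return;
                }
            }
        }
        System.out.println("stack traces were always present");
    }
}
```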
This also adds the flag when running tests and adapts some tests
that had workarounds for the absence of the flag.
Closes #24376
RemoteClusterService is an internal service that should not necessarily be exposed
to plugins or other parts of the system. Yet, for cluster name parsing, for instance,
it is crucial to reuse some of the code used by the RemoteClusterService. This
change extracts a base class that allows the settings-related code to be shared and
cluster settings updates to `search.remote.*` to be observed by other services.
* Fixes #24259
Corrects the ScriptedMetricAggregator so that the script can have
access to scores during the map stage.
* Restored original tests. Added separate test.
As requested, I've restored the non-score-dependent tests, and added the
score-dependent metric as a separate test.
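A minimal sketch of a scripted metric aggregation whose map script reads `_score`, assuming Painless and the `params._agg` state map used at the time (index, aggregation name, and scripts are made up):
```java
import org.elasticsearch.client.Client;
import org.elasticsearch.script.Script;
import org.elasticsearch.search.aggregations.AggregationBuilders;

class ScoreAwareScriptedMetric {
    static void sumScores(Client client) {
        client.prepareSearch("my-index")
            .addAggregation(
                AggregationBuilders.scriptedMetric("score_sum")
                    .initScript(new Script("params._agg.scores = []"))
                    // _score is now visible during the map stage
                    .mapScript(new Script("params._agg.scores.add(_score)"))
                    .combineScript(new Script("double s = 0; for (def v : params._agg.scores) { s += v } return s;"))
                    .reduceScript(new Script("double s = 0; for (def v : params._aggs) { s += v } return s;")))
            .setSize(0)
            .get();
    }
}
```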
Netty uses the number of processors for sizing various resources (e.g.,
thread pools, buffer pools, etc.). However, it uses the runtime number
of available processors which might not match the configured number of
processors as set in Elasticsearch to limit the number of threads (for
example, in Docker containers). A new feature was added to Netty that
enables configuring the number of processors Netty should see when sizing
these various resources. This commit takes advantage of this feature to
set the number of processors that Netty sees equal to the configured
number of processors in Elasticsearch.
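On the Netty side, this boils down to something like the following sketch, assuming Netty 4.1's `NettyRuntime#setAvailableProcessors` hook:
```java
import io.netty.util.NettyRuntime;

class NettyProcessorSetup {
    // Tell Netty the number of processors Elasticsearch is configured with, so
    // that Netty sizes its thread and buffer pools from the same value instead
    // of Runtime.getRuntime().availableProcessors().
    static void configure(int configuredProcessors) {
        NettyRuntime.setAvailableProcessors(configuredProcessors);
    }
}
```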
Relates #24420
* Fix wrong delegation to constructors when compiling lambdas with method references to ctors. Also remove the get$lambda factory.
* Cleanup code and remove unneeded transformations between binary and internal class names (uses ASM Type class instead)
* Cleanup Exception handling
* Simplification by moving the type adaptation to the outside
* Remove STATIC access flag from our Lambda class (not required and also officially not allowed)
* Move the lambda counter to the classloader, so we have a per-script lambda ID
* Change the CodeSource of generated lambdas to be consistent
The _cat/shards docs asserted that one of the columns looked like
a proper byte size but used a regex like `\d+\.\d+.*`, which doesn't
match `0b`, a possible value. Instead this uses
`\d(\.\d+)?[kmg]?b`.
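A quick standalone check of the two patterns against a `0b` value:
```java
import java.util.regex.Pattern;

class ByteSizePatternCheck {
    public static void main(String[] args) {
        String value = "0b"; // a legal _cat/shards store size
        System.out.println(Pattern.matches("\\d+\\.\\d+.*", value));        // false: old pattern misses it
        System.out.println(Pattern.matches("\\d(\\.\\d+)?[kmg]?b", value)); // true: new pattern accepts it
    }
}
```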
The response is attempting to illustrate the sync_id marker, but in
the test the index is too "fresh" to have a sync marker. So the test
needs to execute a sync flush behind the scenes so that the marker
is present.
In #22267, we introduced the notion of incompatible snapshots in a
repository, and they were stored in a root-level blob named
`incompatible-snapshots`. If there were no incompatible snapshots in
the repository, then there was no `incompatible-snapshots` blob.
However, this causes a problem for some cloud-based repositories,
because if the blob does not exist, the cloud-based repositories may
attempt to keep retrying the read of a non-existent blob with
exponential backoff until giving up. This causes performance issues
(and potential timeouts) on snapshot operations because getting the
`incompatible-snapshots` is part of getting the repository data (see
RepositoryData#getRepositoryData()).
This commit fixes the issue by creating an empty
`incompatible-snapshots` blob in the repository if one does not exist.
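A rough sketch of the idea, assuming the `BlobContainer` abstraction with `blobExists` and `writeBlob` as they existed around that time (the real blob contains a serialized empty list rather than zero bytes):
```java
import java.io.ByteArrayInputStream;
import java.io.IOException;

import org.elasticsearch.common.blobstore.BlobContainer;

class EnsureIncompatibleSnapshotsBlob {
    // Write an incompatible-snapshots blob if it is missing so that
    // cloud-based repositories never have to read a non-existent blob.
    static void ensure(BlobContainer container) throws IOException {
        if (container.blobExists("incompatible-snapshots") == false) {
            byte[] empty = new byte[0];
            container.writeBlob("incompatible-snapshots", new ByteArrayInputStream(empty), empty.length);
        }
    }
}
```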
This method has to do with how the transport action may or may not resolve wildcard expressions to alias names. It is only needed in TransportIndicesAliasesAction, and for this reason it should be a private method there rather than part of a request class that is also part of the Java API and, later, of the high level REST client.
This commit cleans up some cases where a list or map was being
constructed, and then an existing collection was copied into the new
collection. The cleanup is to instead use an appropriate constructor to
copy the existing collection directly during construction. The
advantage of this is that the new collection is sized appropriately.
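For example:
```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class CopyDuringConstruction {
    static List<String> copyList(List<String> existing) {
        // Before: new ArrayList<>() followed by addAll(existing), which starts
        // at the default capacity and may resize while copying.
        // After: the copy constructor sizes the new list for the existing one.
        return new ArrayList<>(existing);
    }

    static Map<String, Long> copyMap(Map<String, Long> existing) {
        // Same idea for maps: copy in the constructor instead of putAll afterwards.
        return new HashMap<>(existing);
    }
}
```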
Relates #24409
This commit removes a few duplicates from the list of checkstyle
suppressions that accumulated while auto-generating the list of
suppressions based on the files that currently exceed the 140-column
limit.
The list of checkstyle suppressions was auto-generated based on the list
of files that currently violate the 140-column limit. An errant entry
was committed that did no harm, but this commit removes it.
We are back to having a 140-column limit. While at some point we will
apply auto-formatting tools to the code base, that is a down-the-road
thing and adjusting the suppressions now shaves minutes off the build
time.
Relates #24408
With #24236, the master now uses the pending queue when publishing to itself. This means that a cluster state update is put into the pending queue,
then sent to the ClusterApplierService to be applied. After it has been applied, it is marked as processed and removed from the pending queue.
ensureGreen is implemented as a cluster health action that waits on certain conditions, which leads to a cluster state update task being submitted
on the master. When this task gets to run and the conditions are not yet satisfied, it registers a cluster state observer. This observer is registered
on the ClusterApplierService and waits on cluster state change events. The ClusterApplierService first notifies the observer and then the discovery
layer. This means that there is a small time frame where ensureGreen can complete and call the node stats API only to find the pending queue still
containing the last cluster state update.
Closes #24388
Failure detection should only be updated in ZenDiscovery after the current state has been updated, to prevent a race condition
with handleLeaveRequest and handleNodeFailure, as those check the current state to determine whether the failure is to be handled by this node.
The open/close index APIs, when executed with an index expression that matches no indices, threw an error even when allow_no_indices was set to true. The APIs should rather honour the option and behave as a no-op in that case.
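A minimal sketch of a call that is now a no-op instead of an error, assuming the transport client API of the time (the index pattern is made up):
```java
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.client.Client;

class OpenMissingPattern {
    static void openIfAnyMatch(Client client) {
        // With allow_no_indices (part of lenientExpandOpen), a pattern that
        // matches nothing is now a no-op instead of an error.
        client.admin().indices()
            .prepareOpen("logs-*")
            .setIndicesOptions(IndicesOptions.lenientExpandOpen())
            .get();
    }
}
```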
Closes #24031
Currently we don't write the count value to the geo_centroid aggregation REST response,
but it is provided via the Java API and the count() method in the GeoCentroid interface.
We should add this parameter to the REST output and also provide it via the getProperty()
method.
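Reading the value through the Java API looks roughly like this (index, field, and aggregation names are made up; import paths are as I recall them for that era):
```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.metrics.geocentroid.GeoCentroid;

class GeoCentroidCount {
    static long centroidCount(Client client) {
        SearchResponse response = client.prepareSearch("places")
            .addAggregation(AggregationBuilders.geoCentroid("centroid").field("location"))
            .setSize(0)
            .get();
        GeoCentroid centroid = response.getAggregations().get("centroid");
        // count() reports how many documents contributed to the centroid; this
        // change also exposes the same value in the REST response.
        return centroid.count();
    }
}
```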
Eclipse doesn't allow extra semicolons after an import statement:
```
import foo.Bar;; // <-- syntax error!
```
Here is the Eclipse bug:
https://bugs.eclipse.org/bugs/show_bug.cgi?id=425140
which the Eclipse folks closed as "the spec doesn't allow these
semicolons so why should we?" Which is fair. Here is the bug
against javac for allowing them:
https://bugs.openjdk.java.net/browse/JDK-8027682
which hasn't been touched since 2013 without explanation. There
is, however, a rather educational mailing list thread:
http://mail.openjdk.java.net/pipermail/compiler-dev/2013-August/006956.html
which contains gems like, "In general, it is better/simpler to
change javac to conform to the spec. (Except when it is not.)"
I suspect the reason this hasn't been fixed is:
```
FWIW, if we change javac such that the set of programs accepted by javac
is changed, we have an process (currently Oracle internal) to get
approval for such a change. So, we would not simply change javac on a
whim to meet the spec; we would at least have other eyes looking at the
behavioral change to determine if it is "acceptable".
```
from http://mail.openjdk.java.net/pipermail/compiler-dev/2013-August/006973.html
The alias parameter was documented as a list in our rest-spec, yet only the first value of the list was being read and processed. This commit adds support for multiple aliases to _cat/aliases.
Closes #23661
The version on an update request is syntactic sugar
for a get of a specific version, a doc merge, and a
versioned index. This changes it to reject requests
that combine an upsert with a version.
If the upsert index request itself is versioned, we also
reject the operation.
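A sketch of a request combining both, which is now rejected (index, type, and id are made up; the API is as it looked around that time):
```java
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.common.xcontent.XContentType;

class VersionedUpsert {
    static UpdateRequest rejectedExample() {
        // Combining a version with an upsert no longer makes sense: the version
        // refers to an existing document, while the upsert covers the case
        // where no document exists. Such requests now fail validation.
        return new UpdateRequest("index", "type", "1")
            .version(2)
            .doc("{\"field\":\"value\"}", XContentType.JSON)
            .upsert("{\"field\":\"value\"}", XContentType.JSON);
    }
}
```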
The previous commit (35f78d098a) introduced an assertion in ZenDiscovery that was overly restrictive: it could trip when a cluster state that was
successfully published was not applied locally because a master with a better cluster state came along in the meantime.