It was using the wrong version, which can cause errors like the following when running tests:
```
1> java.security.AccessControlException: access denied ("java.net.SocketPermission" "[0:0:0:0:0:0:0:1]:34221" "connect,resolve")
1> at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_111]
1> at java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_111]
1> at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_111]
1> at java.lang.SecurityManager.checkConnect(SecurityManager.java:1051) ~[?:1.8.0_111]
1> at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:625) ~[?:?]
1> at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processSessionRequests(DefaultConnectingIOReactor.java:273) ~[httpcore-nio-4.4.5.jar:4.4.5]
1> at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:139) ~[httpcore-nio-4.4.5.jar:4.4.5]
1> at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348) ~[httpcore-nio-4.4.5.jar:4.4.5]
1> at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:192) ~[httpasyncclient-4.1.2.jar:4.1.2]
1> at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) ~[httpasyncclient-4.1.2.jar:4.1.2]
```
If we don't do this, then although we don't set the subAggsSupplier again, we will reuse the one set in the previous run, and we can end up in situations where we keep generating sub aggregations until a StackOverflowError is thrown.
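The kind of per-run reset this refers to might look like the following JUnit-style sketch; the test class and the static subAggsSupplier field are hypothetical stand-ins, not the actual test code:
```
import java.util.function.Supplier;
import org.junit.Before;

public class AggregationsParsingTestSketch {
    // Hypothetical shared supplier that decides which sub-aggregations to generate;
    // the real test supplies aggregation objects rather than integers.
    private static Supplier<Integer> subAggsSupplier;

    @Before
    public void resetSubAggsSupplier() {
        // Reset before every run; otherwise a supplier left over from a previous,
        // deeper run keeps producing sub-aggregations and recursion never bottoms out.
        subAggsSupplier = () -> 0;
    }
}
```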
This commit terminates any controller processes plugins might have after
the node has been closed. This gives the plugins a chance to shut down their
controllers gracefully.
Previously there was a race condition in which a controller process could be both shut
down gracefully and terminated by two threads running in parallel, leading to
non-deterministic outcomes.
Additionally, controller processes that failed to shut down gracefully were
not forcibly terminated when running as a Windows service; there was a reliance
on the plugin to shut down its controller gracefully in this situation.
This commit also fixes this problem.
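The termination pattern described here might look roughly like this sketch, using the standard java.lang.Process API and a guard against the two-thread race; the class name and timeout are illustrative, not the actual plugin code:
```
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class ControllerShutdownSketch {
    private final AtomicBoolean terminated = new AtomicBoolean(false);
    private final Process controller;

    public ControllerShutdownSketch(Process controller) {
        this.controller = controller;
    }

    // Called after the node has been closed, so the plugin has already had a
    // chance to ask its controller to exit gracefully.
    public void terminate() throws InterruptedException {
        if (terminated.compareAndSet(false, true) == false) {
            return; // another thread already handled termination; avoid doing it twice
        }
        controller.destroy(); // polite request first
        if (controller.waitFor(10, TimeUnit.SECONDS) == false) {
            controller.destroyForcibly(); // hard kill if graceful shutdown never completes
        }
    }
}
```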
The max concurrent searches logic is complex and we shouldn't duplicate it in the multi search template API;
instead we should render each individual search template request into a regular search request and then delegate to the multi search API.
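A hedged sketch of that delegation, using Elasticsearch's MultiSearchRequest and SearchRequest; the SearchTemplateRequestSketch type and renderTemplate helper are hypothetical stand-ins for the real template rendering step:
```
import java.util.List;
import org.elasticsearch.action.search.MultiSearchRequest;
import org.elasticsearch.action.search.SearchRequest;

public class MultiSearchTemplateSketch {
    // Render each template into a plain SearchRequest, then let the multi search
    // API apply its own max concurrent searches logic.
    public MultiSearchRequest toMultiSearch(List<SearchTemplateRequestSketch> templates) {
        MultiSearchRequest multiSearch = new MultiSearchRequest();
        for (SearchTemplateRequestSketch template : templates) {
            SearchRequest rendered = renderTemplate(template); // hypothetical rendering step
            multiSearch.add(rendered);
        }
        return multiSearch;
    }

    // Placeholder type, not the real Elasticsearch class.
    public static class SearchTemplateRequestSketch {}

    private SearchRequest renderTemplate(SearchTemplateRequestSketch template) {
        return new SearchRequest(); // real code would apply the Mustache template here
    }
}
```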
Adds a new "icu_collation" field type that exposes Lucene's
ICUCollationDocValuesField. ICUCollationDocValuesField is the replacement
for ICUCollationKeyFilter, which has been deprecated since Lucene 5.
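For context, a small sketch of how ICUCollationDocValuesField is typically used at the Lucene level (the new field type wraps this); the locale and field name are illustrative:
```
import com.ibm.icu.text.Collator;
import com.ibm.icu.util.ULocale;
import org.apache.lucene.collation.ICUCollationDocValuesField;
import org.apache.lucene.document.Document;

public class IcuCollationSketch {
    public static Document docWithCollatedSortKey(String value) {
        // Collator for German sort order; the field stores the resulting collation
        // key as doc values so sorting follows that locale's rules.
        Collator collator = Collator.getInstance(new ULocale("de"));
        ICUCollationDocValuesField sortField = new ICUCollationDocValuesField("name_sort", collator);
        sortField.setStringValue(value);

        Document doc = new Document();
        doc.add(sortField);
        return doc;
    }
}
```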
This commit renames ScriptEngineService to ScriptEngine. The old name was often
confusing because we have the ScriptService, and then ScriptEngineService
implementations, but the latter are not services in the sense we use elsewhere
in Elasticsearch.
File scripts have two related settings: the path of file scripts, and
whether they can be dynamically reloaded. This commit deprecates those
settings.
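For illustration, deprecating a setting in Elasticsearch usually means declaring it with Property.Deprecated, along these lines; the setting name below is a placeholder, not one of the real file script settings:
```
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;

public class FileScriptSettingsSketch {
    // Illustrative name only; the real settings cover the file script path and
    // whether file scripts are dynamically reloaded.
    public static final Setting<Boolean> FILE_SCRIPTS_RELOAD_SKETCH =
        Setting.boolSetting("script.sketch.auto_reload_enabled", true,
            Property.NodeScope, Property.Deprecated);
}
```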
relates #21798
A constraint on the global checkpoint was inadvertently introduced by
the inlining global checkpoint work. Namely, the constraint prevents the
global checkpoint from advancing to "no ops performed", a situation that
can occur when shards are started but empty.
Today we rely on background syncs to relay the global checkpoint under
the mandate of the primary to its replicas. This means that the global
checkpoint on a replica can lag far behind the primary. The commit moves
to inlining global checkpoints with replication requests. When a
replication operation is performed, the primary will send the latest
global checkpoint inline with the replica requests. This keeps the
replicas closer in-sync with the primary.
However, consider a replication request that is not followed by another
replication request for an indefinite period of time. When the replicas
respond to the primary with their local checkpoint, the primary will
advance its global checkpoint. During this indefinite period of time,
the replicas will not be notified of the advanced global
checkpoint. This necessitates another sync. To achieve this,
we perform a global checkpoint sync when a shard falls idle.
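Conceptually, the idle-triggered sync could be sketched like this with a plain ScheduledExecutorService; the names and the fixed one-second poll are illustrative rather than Elasticsearch's actual scheduling code:
```
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class GlobalCheckpointIdleSyncSketch {
    private final AtomicLong lastReplicationNanos = new AtomicLong(System.nanoTime());
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void onReplicationRequest() {
        // Replication requests already carry the global checkpoint inline.
        lastReplicationNanos.set(System.nanoTime());
    }

    public void start(Runnable syncGlobalCheckpoint, long idleThresholdNanos) {
        // Periodically check whether the shard has fallen idle; if so, push the
        // latest global checkpoint to the replicas with an explicit sync.
        scheduler.scheduleAtFixedRate(() -> {
            long idleNanos = System.nanoTime() - lastReplicationNanos.get();
            if (idleNanos >= idleThresholdNanos) {
                syncGlobalCheckpoint.run();
            }
        }, 1, 1, TimeUnit.SECONDS);
    }
}
```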
Relates #24513
AggregationsTests#testFromXContent verifies that parsing of aggregations works by combining multiple aggs at the same level, and also by adding sub-aggregations to multi-bucket and single-bucket aggs, up to a maximum depth of 5.
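A minimal sketch of depth-bounded random generation of that kind, with made-up names; the real test builds aggregation objects rather than strings:
```
import java.util.Random;

public class RandomAggTreeSketch {
    private static final int MAX_DEPTH = 5;

    // Recursively build a description of an aggregation tree, stopping once the
    // maximum depth is reached so generation always terminates.
    static String randomAgg(Random random, int depth) {
        StringBuilder agg = new StringBuilder("agg_at_depth_" + depth);
        if (depth < MAX_DEPTH && random.nextBoolean()) {
            agg.append(" -> ").append(randomAgg(random, depth + 1));
        }
        return agg.toString();
    }
}
```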
This changes the way we register pre-configured token filters so that
plugins can declare them and starts to move all of the pre-configured
token filters out of core. It doesn't finish the job because doing
so would make the change unreviewably large. So this PR includes
a shim that keeps the "old" way of registering pre-configured token
filters around.
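Under this change, a plugin declaring a pre-configured token filter might look roughly like the following; treat the exact method signatures as approximate rather than authoritative:
```
import java.util.List;

import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;
import org.elasticsearch.index.analysis.PreConfiguredTokenFilter;
import org.elasticsearch.plugins.AnalysisPlugin;
import org.elasticsearch.plugins.Plugin;

import static java.util.Collections.singletonList;

public class MyAnalysisPluginSketch extends Plugin implements AnalysisPlugin {
    @Override
    public List<PreConfiguredTokenFilter> getPreConfiguredTokenFilters() {
        // Declare a pre-configured filter by name; the factory wraps the token stream.
        return singletonList(
            PreConfiguredTokenFilter.singleton("my_asciifolding", true, ASCIIFoldingFilter::new));
    }
}
```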
The Lowercase token filter is special because of a particular interaction
between it and the lowercase tokenizer. I'm not sure exactly what to do
about it, so for now I'm leaving it alone with the intent of figuring out
what to do with it in a followup.
This also renames these pre-configured token filters from
"pre-built" to "pre-configured" because that seemed like a more
descriptive name.
This is a part of #23658
This test waited 10 seconds for a refresh listener to appear in
the stats. It turns out that in our NFS testing infrastructure this can
take a lot longer than 10 seconds. The error reported here:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+nfs/257/consoleFull
has it taking something like 15 seconds. This bumps the timeout
to a solid minute.
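For illustration, the kind of wait being extended here, written as a plain polling loop with a one-minute deadline; the pendingRefreshListeners accessor is a placeholder for the stats lookup:
```
import java.util.concurrent.TimeUnit;
import java.util.function.IntSupplier;

public class RefreshListenerWaitSketch {
    // Poll for up to a full minute (instead of 10 seconds) for a refresh listener
    // to show up in the stats, failing only after the deadline has passed.
    public static void waitForListener(IntSupplier pendingRefreshListeners) throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MINUTES.toNanos(1);
        while (pendingRefreshListeners.getAsInt() == 0) {
            if (System.nanoTime() > deadline) {
                throw new AssertionError("refresh listener did not appear within one minute");
            }
            Thread.sleep(100);
        }
    }
}
```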
Closes #24417
Now that indices have a single type by default, we can move to the next step
and identify documents using their `_id` rather than the `_uid`.
One notable change in this commit is that I made deletions implicitly create
types. This helps with the live version map in the case that documents are
deleted before the first type is introduced. Otherwise there would be no way
to differentiate `DELETE index/foo/1` followed by `PUT index/foo/1` from
`DELETE index/bar/1` followed by `PUT index/foo/1`, even though those are
different if versioning is involved.
If a node on version >= 5.3 acts as the coordinating node for a scroll request that targets a single shard, the scroll may return the same documents over and over if the targeted shard is hosted by a node on version <= 5.3.
Nodes on those versions will advance the scroll only if the search_type has been set to `query_and_fetch`, even though this search type has been removed in 5.3.
This change handles the situation by adding the removed search_type to requests that target a node on version <= 5.3.
Previously, Mustache would call `toString` on the `_ingest.timestamp`
field and return a date format that did not match Elasticsearch's
defaults for date-mapping parsing. The new ZonedDateTime class in Java 8
happens to format itself in the same way ES expects.
This commit adds support for a feature flag that enables use of this new date format,
which has more native behavior.
Fixes #23168.
This new fix can be found in the form of a cluster setting called
`ingest.new_date_format`. By default, in 5.x, the existing behavior
will remain the same. One will set this property to `true` in order to
take advantage of this update for ingest-pipeline convenience.
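Declaring a boolean cluster setting like this typically looks as follows with Elasticsearch's Setting API; the property flags below are assumptions for illustration:
```
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;

public class IngestDateFormatSettingSketch {
    // Defaults to false so 5.x keeps the existing behavior unless opted in.
    public static final Setting<Boolean> NEW_DATE_FORMAT =
        Setting.boolSetting("ingest.new_date_format", false, Property.NodeScope, Property.Dynamic);
}
```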
This commit fixes inefficient (worst case exponential) loading of
snapshot repository data when checking for incompatible snapshots,
that was introduced in #22267. When getting snapshot information,
getRepositoryData() was called on every snapshot, so if there are
a large number of snapshots in the repository and _all snapshots
were requested, the performance degraded exponentially. This
commit fixes the issue by only calling getRepositoryData once and
using the data from it in all subsequent calls to get snapshot
information.
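The shape of the fix can be sketched as hoisting the expensive call out of the loop; all types below are hypothetical stand-ins for the repository classes:
```
import java.util.ArrayList;
import java.util.List;

public class SnapshotInfoSketch {
    interface Repository {
        RepositoryData getRepositoryData(); // potentially expensive: loads repository metadata

        SnapshotInfo getSnapshotInfo(RepositoryData data, String snapshotId);
    }

    interface RepositoryData {}

    interface SnapshotInfo {}

    static List<SnapshotInfo> snapshots(Repository repository, List<String> snapshotIds) {
        // Load the repository data once and reuse it for every snapshot, rather than
        // reloading it inside the loop for each snapshot.
        RepositoryData data = repository.getRepositoryData();
        List<SnapshotInfo> infos = new ArrayList<>();
        for (String snapshotId : snapshotIds) {
            infos.add(repository.getSnapshotInfo(data, snapshotId));
        }
        return infos;
    }
}
```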
Closes #24509
When starting a new replication group in an index level replication test
case, a started replica would not have a valid recovery state. This
violates the simple assumption that replicas always have to have recovered
before being started. This commit gives started replicas a valid recovery
state so that this assumption is no longer violated.
We previously removed this assertion because it could be violated in
races. This commit adds this assertion back with sampling done more
carefully to avoid failures solely due to race conditions.