```
> Throwable #1: java.lang.RuntimeException: file handle leaks: [SeekableByteChannel(/var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/core/build/testrun/integTest/J0/temp/org.elasticsearch.search.suggest.CompletionSuggestSearch2xIT_518545A20D129C8C-001/tempDir-001/data/nodes/1/indices/4sTECv6WSJOJsw9L4CGamg/0/index/segments_1), SeekableByteChannel(/var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/core/build/testrun/integTest/J0/temp/org.elasticsearch.search.suggest.CompletionSuggestSearch2xIT_518545A20D129C8C-001/tempDir-001/data/nodes/1/indices/4sTECv6WSJOJsw9L4CGamg/0/index/segments_1)]
> at __randomizedtesting.SeedInfo.seed([518545A20D129C8C]:0)
> at org.apache.lucene.mockfile.LeakFS.onClose(LeakFS.java:63)
> at org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:77)
> at org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:78)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.Exception
> at org.apache.lucene.mockfile.LeakFS.onOpen(LeakFS.java:46)
> at org.apache.lucene.mockfile.HandleTrackingFS.callOpenHook(HandleTrackingFS.java:81)
> at org.apache.lucene.mockfile.HandleTrackingFS.newByteChannel(HandleTrackingFS.java:271)
> at org.apache.lucene.mockfile.FilterFileSystemProvider.newByteChannel(FilterFileSystemProvider.java:212)
> at org.apache.lucene.mockfile.HandleTrackingFS.newByteChannel(HandleTrackingFS.java:240)
> at java.nio.file.Files.newByteChannel(Files.java:361)
> at java.nio.file.Files.newByteChannel(Files.java:407)
> at org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:77)
> at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:94)
> at org.apache.lucene.util.LuceneTestCase.slowFileExists(LuceneTestCase.java:2695)
> at org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:737)
> at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:94)
> at org.elasticsearch.common.lucene.Lucene$1.doBody(Lucene.java:237)
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:685)
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:637)
> at org.elasticsearch.common.lucene.Lucene.checkSegmentInfoIntegrity(Lucene.java:242)
> at org.elasticsearch.index.store.Store$MetadataSnapshot.loadMetadata(Store.java:847)
> at org.elasticsearch.index.store.Store$MetadataSnapshot.<init>(Store.java:740)
> at org.elasticsearch.index.store.Store.getMetadata(Store.java:260)
> at org.elasticsearch.index.store.Store.getMetadata(Store.java:240)
> at org.elasticsearch.index.shard.IndexShard.doCheckIndex(IndexShard.java:1310)
> at org.elasticsearch.common.util.CancellableThreads.executeIO(CancellableThreads.java:102)
> at org.elasticsearch.index.shard.IndexShard.checkIndex(IndexShard.java:1288)
> at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:921)
> at org.elasticsearch.index.shard.IndexShard.skipTranslogRecovery(IndexShard.java:964)
> at org.elasticsearch.indices.recovery.RecoveryTarget.prepareForTranslogOperations(RecoveryTarget.java:297)
> at
```
The MMapDirectory has a switch that allows the content of files to be loaded
into the filesystem cache upon opening. This commit exposes it with the new
`index.store.pre_load` setting.
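For illustration, a minimal sketch of applying the setting when creating an index via the Java client; the value format used here (a list of file extensions to pre-load) is an assumption and not taken from this commit:
```
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;

class PreLoadExample {
    static void createIndex(Client client) {
        Settings settings = Settings.builder()
            .put("index.number_of_shards", 1)
            // assumed value format: extensions of files to load into the FS cache on open
            .put("index.store.pre_load", "nvd,dvd")
            .build();
        client.admin().indices().prepareCreate("my_index").setSettings(settings).get();
    }
}
```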
Today we only report that the setting wasn't found unless we have
some did-you-mean (DYM) suggestions. Yet, if a setting is not found at all and there
are no suggestions due to typos, it's likely a removed setting or the plugin
that is supposed to be configured is not installed.
This commit adds some info text to the exception to help the user debug
the problem before opening bug reports.
Instead of emitting:
`unknown setting [foo.bar]`
we now emit:
`unknown setting [foo.bar] please check the migration guide for removed settings and ensure that the plugin you are configuring is installed`
Relates to #18663
If someone sets `index.shard.check_on_startup`, shard startup time can be slow (by design, it diligently goes and checks all data). If for some reason the shard is closed in that time, the store ref is kept around and prevents a new shard copy from being allocated to this node via the shard level locks. This is especially tricky if the shard is closed due to a cancelled recovery which may restart again soon.
This commit adds a cancellable threads instance to each IndexShard and performs index checking underneath it, so it can be cancelled on close.
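A rough sketch of the idea, not the actual IndexShard code: only `executeIO` is visible in stack traces like the one above, so the `cancel` method and the overall shape are assumptions.
```
import java.io.IOException;

import org.elasticsearch.common.util.CancellableThreads;

class CheckIndexOnStartup {
    private final CancellableThreads cancellableThreads = new CancellableThreads();

    void checkIndex() throws IOException {
        // the IO-heavy check runs inside the cancellable section
        cancellableThreads.executeIO(this::doCheckIndex);
    }

    void close() {
        // cancelling interrupts a thread stuck in doCheckIndex, releasing the store ref
        cancellableThreads.cancel("shard is closed");
    }

    private void doCheckIndex() throws IOException {
        // walk the shard's segments and verify checksums (omitted)
    }
}
```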
We pretended to be able to act like a different version node for so long that it's
time to be honest and remove this ability. It's just confusing, and where needed
and tested we should build dedicated extension points.
This commit introduces unit testing infrastructure to test replication operations using real index shards. This infra is complementary to the full integration tests and the unit testing of ReplicationOperation we already have. The new ESIndexLevelReplicationTestCase base class makes it easier to test and simulate failure modes that require real shards but do not need the full-blown stack of a complete node.
The commit also adds a simple "nothing is wrong" test plus a test that checks we don't drop docs during the various stages of recovery.
For now, only single doc indexing is supported but this can be easily extended in the future.
This class was forked in 0.20 to remove a volatile keyword. There is
no issue attached to the commit, no evidence of the criticality of the
change, nor does it seem to be correct since we set this value internally as well.
I think this class should be used as-is from joda-time even if we have to pay
the price of volatile reads. We can't do 3rd-party optimizations in our codebase
this way; it's just not maintainable.
This was added in 2280915d3c
This commit removes the search preference _only_node as the same
functionality can be obtained by using the search preference
_only_nodes. This commit also adds a test that ensures that _only_nodes
will continue to support specifying node IDs.
Relates #18875
In the past, we had the semantics where the very first cluster state a node processed after joining could not contain shard assignments to it. This was to make sure the node cleaned up local / stale shard copies before receiving new ones that might confuse it. Since then, a lot of work has gone into this area, most notably the introduction of allocation ids and #17270. This means we don't have to be careful anymore and can just reroute in the same cluster state change where we process the join, keeping things simple and following the same pattern we have in other places.
This change removes some unnecessary dependencies from ClusterService
and cleans up ClusterName creation. ClusterService is no longer created
by guice.
The NodeJoinController is responsible for processing joins from nodes, both normally and during master election. For both use cases, the class processes incoming joins in batches in order to be efficient and to accumulate enough joins (i.e., >= min_master_nodes) to seal an election and ensure the new cluster state can be committed. Since the class was written, we introduced a new infrastructure to support batch changes to the cluster state at the `ClusterService` level. This commit rewrites NodeJoinController to use that infra and be simpler.
The PR also introduces a new concept to ClusterService allowing tasks to be submitted in batches, guaranteeing that all tasks submitted in a batch will be processed together (potentially with more tasks). On top of that I added some extra safety checks to the ClusterService, around potential double submission of task objects into the queue.
This is done in preparation to revive #17811
Today we have a push model for registering basically anything. All our extension points
are defined on modules which we pass in to plugins. This is harder to maintain and adds
unnecessary dependencies on the modules themselves. This change moves towards a pull model
where the plugin offers a getter-style method to provide the extensions. This will also
help in the future if we need to pass dependencies to the extension points, which can
easily be defined as arguments on the method if a pull model is used.
Currently the error messages for failing tests in the TimeZoneRoundingTests test
suite are hard to read because they usually report the actual and expected dates
in milliseconds UTC (e.g. "Expected: <1414270860000L> but: was <1414270800000L>").
This makes failing tests hard to read.
This change introduces a new Matcher that can be used for equality checks on
long dates but reports the error both as a date string formatted according to
some time zone and also as the actual long values, so you get messages like
"Expected: 2014-10-26T00:01:00.000+03:00 [1414270860000] but: was
2014-10-26T00:00:00.000+03:00 [1414270800000]".
Also cleans up some helper methods and generally simplifies a few test
cases. Otherwise this change shouldn't affect either the scope of the test or
anything about the rounding implementation itself.
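A minimal sketch of what such a matcher could look like, using Hamcrest and joda-time; the class and method names are illustrative, not the ones added by this change:
```
import org.hamcrest.Description;
import org.hamcrest.TypeSafeMatcher;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

public class DateMatcher extends TypeSafeMatcher<Long> {
    private final long expectedMillis;
    private final DateTimeZone tz;

    public DateMatcher(long expectedMillis, DateTimeZone tz) {
        this.expectedMillis = expectedMillis;
        this.tz = tz;
    }

    @Override
    protected boolean matchesSafely(Long actualMillis) {
        return actualMillis == expectedMillis;
    }

    @Override
    public void describeTo(Description description) {
        // report both the formatted date and the raw millis
        description.appendText(new DateTime(expectedMillis, tz).toString())
                   .appendText(" [").appendValue(expectedMillis).appendText("]");
    }

    @Override
    protected void describeMismatchSafely(Long actualMillis, Description mismatch) {
        mismatch.appendText("was ")
                .appendText(new DateTime(actualMillis, tz).toString())
                .appendText(" [").appendValue(actualMillis).appendText("]");
    }
}
```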
They have been implemented in https://issues.apache.org/jira/browse/LUCENE-7289.
Ranges are implemented so that the accuracy loss only occurs at index time,
which means that if you are searching for values between A and B, the query will
match exactly all documents whose value rounded to the closest half-float point
is between A and B.
Registering a script engine or native scripts still uses Guice today
and is much more complicated than needed. This change moves to a pull
based model where script plugins have to implement a dedicated interface
`ScriptPlugin` and define simple getters returning instances rather than
classes.
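A hedged sketch of the pull model for a script plugin; the getter names and signatures are assumptions based on the description above, and MyScriptEngineService / MyNativeScriptFactory are hypothetical placeholders:
```
import java.util.Collections;
import java.util.List;

import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.plugins.ScriptPlugin;
import org.elasticsearch.script.NativeScriptFactory;
import org.elasticsearch.script.ScriptEngineService;

public class MyScriptPlugin extends Plugin implements ScriptPlugin {

    @Override
    public ScriptEngineService getScriptEngineService(Settings settings) {
        // an instance is returned, not a class registered with a module
        return new MyScriptEngineService(settings); // hypothetical placeholder
    }

    @Override
    public List<NativeScriptFactory> getNativeScripts() {
        return Collections.singletonList(new MyNativeScriptFactory()); // hypothetical placeholder
    }
}
```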
In 2.0 we added plugin descriptors which require defining a name and
description for the plugin. However, we still have name() and
description() which must be overridden from the Plugin class. This still
exists for classpath plugins. But classpath plugins are mainly for
tests, and even then, referring to classpath plugins with their class is
a better idea. This change removes name() and description(), replacing
the name for classpath plugins with the full class name.
This interface used to have dedicated methods to prevent calling execute
methods. These methods are unnecessary as the checks can simply be
done inside the execute methods themselves. This simplifies the interface
as well as its usage.
With this commit we exclude certain HTTP requests that are needed to inspect the cluster
from HTTP request limiting to ensure these commands are processed even in critical
memory conditions.
Relates #17951, relates #18145, closes#18833
When installing plugins, we first try the elastic download service for
official plugins, then try maven coordinates, and finally try the
argument as a url. This can lead to confusing error messages about
unknown protocols when, e.g., an official plugin name is misspelled. This
change adds a heuristic for determining if the argument in the final
case is in fact a url that we should try, and gives a simplified error
message in the case it is definitely not a url.
closes#17226
The search preference _prefer_node allows specifying a single node to
prefer when routing a request. This functionality can be enhanced by
permitting multiple nodes to be preferred. This commit replaces the
search preference _prefer_node with the search preference _prefer_nodes
which supplants the former when specifying a single node and otherwise
adds functionality.
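A hedged usage sketch with the Java client; the node IDs are placeholders and the `_prefer_nodes:<id>,<id>` preference string syntax is assumed from the description above:
```
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;

class PreferNodesExample {
    static SearchResponse search(Client client) {
        return client.prepareSearch("my_index")
            // prefer two nodes instead of one (node IDs are placeholders)
            .setPreference("_prefer_nodes:node-id-1,node-id-2")
            .setQuery(QueryBuilders.matchAllQuery())
            .get();
    }
}
```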
Relates #18872
This caches FieldStats at the field level. For one-off requests or for a
few indices this doesn't save anything, but with 30 indices,
5 shards, 1 replica, and 100 parallel requests it is about twice as fast
as not caching. I expect many usages won't see much benefit from this,
but pointing Kibana at a cluster with many indices and shards will be
faster.
Closes#18717
This adds a get task API that supports GET /_tasks/${taskId} and
removes that responsibility from the list tasks API. The get task
API supports wait_for_completion just as the list tasks API does
but doesn't support any of the list task API's filters. In exchange,
it supports falling back to the .results index when the task isn't
running any more. Like any good GET API it 404s when it doesn't
find the task.
Then we change reindex, update-by-query, and delete-by-query to
persist the task result when wait_for_completion=false. This leads
to the neat behavior that, once you start a reindex with
wait_for_completion=false, you can fetch the result of the task by
using the get task API and see the result when it has finished.
Also rename the .results index to .tasks.
There are edge cases where rounding a date to a certain interval using a time
zone with DST shifts can currently cause the rounded date to be bigger than the
original date. This happens when rounding a date closely after a DST start and
the rounded date falls into the DST gap.
Here is an example for CET time zone, where local time is set forward by one
hour at 2016-03-27T02:00:00+01:00 to 2016-03-27T03:00:00.000+02:00:
The date 2016-03-27T03:01:00.000+02:00 (1459040460000) which is just after the
DST change is first converted to local time (1459047660000). If we then apply
interval rounding for a 14m interval in local time, this takes us to
1459047240000, which unfortunately falls into the DST gap. When converting
this back to UTC, joda provides options to throw exceptions on illegal dates
like this, or correct this by adjusting the date to the new time zone offset.
We currently do the latter, but this leads to converting this illegal date back
to 2016-03-27T03:54:00.000+02:00 (1459043640000), giving us a date that is
larger than the original date we wanted to round.
This change fixes this by using the "strict" option of 'convertLocalToUTC()'
to detect rounded dates that fall into the DST gap. If this happens, we can use
the time of the DST change instead as the interval start.
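A small illustration of the strict conversion check, assuming joda-time's `convertLocalToUTC(long, boolean)` and `IllegalInstantException`; this is a sketch of the idea, not the actual Rounding code:
```
import org.joda.time.DateTimeZone;
import org.joda.time.IllegalInstantException;

final class DstGapCheck {
    /** Returns the UTC millis for a rounded local time, or -1 if it falls into a DST gap. */
    static long toUtcOrGap(long roundedLocalMillis, DateTimeZone tz) {
        try {
            return tz.convertLocalToUTC(roundedLocalMillis, true); // strict: throws inside a gap
        } catch (IllegalInstantException gap) {
            // caller falls back to the moment of the DST shift as the interval start
            return -1L;
        }
    }
}
```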
Even before this change, intervals around DST shifts like this can be shorter
than the desired interval. This, for example, happens when the requested
interval width doesn't completely fit into the remaining time span when the DST
shift happens. For example, using a 14m interval in UTC+1 (CET before DST
starts) leads to the following valid rounding values around the time where DST
happens:
2016-03-27T01:30:00+01:00
2016-03-27T01:44:00+01:00
2016-03-27T01:58:00+01:00
2016-03-27T02:12:00+01:00
2016-03-27T02:26:00+01:00
...
while the rounding values in UTC+2 (CET after DST start) are placed like this
around the same time:
2016-03-27T02:40:00+02:00
2016-03-27T02:54:00+02:00
2016-03-27T03:08:00+02:00
2016-03-27T03:22:00+02:00
...
From this we can see that when we switch from UTC+1 to UTC+2 at 02:00 the last
rounding value in UTC+1 is at 01:58 and the first valid one in UTC+2 is at
03:08, so even if we decide to put all the dates in between into one rounding
interval, it will only cover 10 minutes. With this change we choose to use the
moment of the DST shift as an additional interval separator, leaving us with a 2min
interval from [01:58,02:00) before the shift and an 8min interval from
[03:00,03:08) after the shift.
This change also adds tests for the above example and adds randomization to the
existing TimeIntervalRounding tests.
By default the number of searches msearch executes is capped by the number of
nodes multiplied by the default size of the search threadpool. This default can be
overridden with the newly added `max_concurrent_searches` parameter.
Before this change, the msearch api executed all searches concurrently. If many large
msearch requests were executed, this could lead to some searches being rejected
while other searches in the msearch request succeeded.
The goal of this change is to avoid exhausting the search threadpool this way.
Closes#17926
Writeable is better for immutable objects like TimeValue.
Switch to writeZLong, which takes up less space than the original
writeLong in the majority of cases. Since we expect negative
TimeValues we shouldn't use writeVLong.
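A simplified sketch of the Writeable pattern on a made-up `Duration` class (not the actual TimeValue code), assuming the constructor-based reading style and the `readZLong`/`writeZLong` stream methods mentioned above:
```
import java.io.IOException;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;

public class Duration implements Writeable {
    private final long millis;

    public Duration(long millis) {
        this.millis = millis;
    }

    public Duration(StreamInput in) throws IOException {
        this.millis = in.readZLong(); // zig-zag decoded
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        // zig-zag encoding: small absolute values, including negatives, use few bytes
        out.writeZLong(millis);
    }
}
```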
Today we use a random source of UUIDs for assigning allocation IDs,
cluster IDs, etc. Yet, the source of randomness for this is not
reproducible in tests. Since allocation IDs end up as keys in hash maps,
this means allocation decisions are not reproducible in tests, and this
leads to non-reproducible test failures. This commit modifies the
behavior of random UUIDs so that they are reproducible under tests. The
behavior for production code is not changed; we still use a true source
of secure randomness, but under tests we just use a reproducible source
of non-secure randomness.
It is important to note that there is a test,
UUIDTests#testThreadedRandomUUID, that relies on the UUIDs being truly
random. Thus, we have to modify the setup for this test to use a true
source of randomness. As a result, this is one test that will never be
reproducible, but that is intentional.
Relates #18808
When trying to restore a snapshot of an index created in a previous
version of Elasticsearch, it is possible that empty shards in the
snapshot have a segments_N file that has an unsupported Lucene version
and a missing checksum. This leads to issues with restoring the
snapshot. This commit handles this special case by avoiding a restore
of a shard that has no data, since there is nothing to restore anyway.
Closes#18707
Testability of ICSS is achieved by introducing interfaces for IndicesService, IndexService and IndexShard. These interfaces extract all relevant methods used by ICSS (which do not deal directly with the store) and make it possible to easily mock all the store behavior away in the tests (and cut down on dependencies).
Adds aggregation profiling; initially this will only be for the shard phases (i.e. the reduce phase will not be profiled in this change).
This change refactors the query profiling class to extract abstract classes where it is useful for other profiler types to share code.
The bug presented as listeners never being called if you refresh at the same
time as the listener is added. It was caught rarely by
testConcurrentRefresh. Mostly this change is removing code and adding a comment:
```
Note that it is not safe for us to abort early if we haven't advanced the
position here because we set and read lastRefreshedLocation outside of a
synchronized block. We do that so that waiting for a refresh that has
already passed is just a volatile read but the cost is that any check
whether or not we've advanced the position will introduce a race between
adding the listener and the position check. We could work around this by
moving this assignment into the synchronized block below and double
checking lastRefreshedLocation in addOrNotify's synchronized block but
that doesn't seem worth it given that we already skip this process early
if there aren't any listeners to iterate.
```
This commit addresses a performance issue in
IndicesClusterStateService#applyDeletedShards. Namely, the current
implementation is O(number of indices * number of shards). This is
because of an outer loop over the indices and an inner loop over the
assigned shards, all to check if a shard is in the outer index. Instead,
we can group the shards by index, and then just do a map lookup for each
index.
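A plain-Java illustration of the grouping idea; the `Shard` class is a hypothetical stand-in for a shard routing entry, not the actual IndicesClusterStateService code:
```
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class ShardsByIndex {
    static class Shard {            // hypothetical stand-in for a shard routing entry
        final String index;
        final int id;
        Shard(String index, int id) { this.index = index; this.id = id; }
    }

    // bucket locally assigned shards by their index once, then do an O(1)
    // lookup per index instead of an inner loop over all shards for every index
    static Map<String, List<Shard>> group(List<Shard> assignedShards) {
        Map<String, List<Shard>> byIndex = new HashMap<>();
        for (Shard shard : assignedShards) {
            byIndex.computeIfAbsent(shard.index, k -> new ArrayList<>()).add(shard);
        }
        return byIndex;
    }
}
```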
Testing this on a single-node with 2500 indices, each with 2 shards,
creating an index before this optimization takes 0.90s and after this
optimization takes 0.19s.
Relates #18788
You declare them like
```
static {
PARSER.declareInt(optionalConstructorArg(), new ParseField("animal"));
}
```
Other than being optional they follow all of the rules of regular
`constructorArg()`s. Parsing an object with optional constructor args
is going to be slightly less efficient than parsing an object with
all required args if some of the optional args aren't specified because
ConstructingObjectParser isn't able to build the target before the
end of the json object.
Due to an error in our current TimeIntervalRounding, two dates can
round to the same key, even when they are 1h apart when using
short interval roundings (e.g. 20m) and a time zone with DST change.
Here is an example for the CET time zone:
On 25 October 2015, 03:00:00 clocks are turned backward 1 hour to
02:00:00 local standard time. The dates
"2015-10-25T02:15:00+02:00" (1445732100000) (before DST end) and
"2015-10-25T02:15:00+01:00" (1445735700000) (after DST end)
are thus 1h apart, but currently they round to the same value
"2015-10-25T02:00:00.000+01:00" (1445734800000).
This violates an important invariant of rounding, namely that the
rounded value must be less or equal to the value that is rounded.
It also leads to wrong histogram bucket counts because documents in
[02:00:00+02:00, 02:20:00+02:00) go to the same bucket as documents
from [02:00:00+01:00, 02:20:00+01:00).
The problem happens because in TimeIntervalRounding#roundKey() we
need to perform the rounding operation in local time, but on
converting back to UTC we don't honor the original value's time zone
offset. This fix changes that and adds tests both for DST start and
DST end, as well as a test that demonstrates what happens to bucket
sizes when the DST change is not evenly divisible by the interval.
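A hedged illustration of the corrected conversion, assuming joda-time; this is a sketch of the idea, not the actual TimeIntervalRounding implementation:
```
import org.joda.time.DateTimeZone;

class IntervalRoundingSketch {
    static long roundKey(long utcMillis, long intervalMillis, DateTimeZone tz) {
        long offset = tz.getOffset(utcMillis);          // offset that applied to the original value
        long localMillis = utcMillis + offset;          // convert to local time
        long roundedLocal = Math.floorDiv(localMillis, intervalMillis) * intervalMillis;
        // convert back to UTC honoring the original offset, so values on either
        // side of a DST change keep distinct keys
        return roundedLocal - offset;
    }
}
```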
Previously Elasticsearch used $DATA_DIR/$CLUSTER_NAME/nodes for the path
where data is stored; this commit changes that to be $DATA_DIR/nodes.
On startup, if the old folder structure is detected it will be used.
This behavior will be removed in Elasticsearch 6.0.
Resolves#17810
Folded grok processor into ingest-common module.
The rest tests have been moved to the ingest-common module as well, because these tests don't run in the rest-api-spec module but in the distribution:integ-test-zip module,
and adding a test plugin there felt just wrong to me. I think this is ok. I left a tiny ingest rest test behind there that tests with an empty pipeline.
Removed messy tests; these tests were already covered by the rest tests.
Added an ingest test plugin to the test infra so that each module testing integration with ingest doesn't need to write its own plugin.
Moved reindex ingest tests to qa module
Closes#18490
API:
```
curl -XGET 'localhost:9200/twitter/tweet/_search?scroll=1m' -d '{
"slice": {
"field": "_uid", <1>
"id": 0, <2>
"max": 10 <3>
},
"query": {
"match" : {
"title" : "elasticsearch"
}
}
}'
```
<1> (optional) The field name used to do the slicing (_uid by default)
<2> The id of the slice
<3> The maximum number of slices
By default the splitting is done on the shards first and then locally on each shard using the _uid field
with the following formula:
`slice(doc) = floorMod(hashCode(doc._uid), max)`
For instance if the number of shards is equal to 2 and the user requested 4 slices then the slices 0 and 2 are assigned
to the first shard and the slices 1 and 3 are assigned to the second shard.
Each scroll is independent and can be processed in parallel like any scroll request.
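A small illustration of the formula; the hash applied to `_uid` here (String#hashCode) is an assumption, not the actual implementation:
```
class SliceFormula {
    // a document belongs to the slice whose id equals floorMod(hash(_uid), max)
    static int sliceOf(String uid, int maxSlices) {
        return Math.floorMod(uid.hashCode(), maxSlices); // always in [0, maxSlices)
    }
}
```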
Closes#13494
This commit modifies the bootstrap check invocations in the "might fork"
tests to use the underlying test name when setting up the logging prefix
for invoking the bootstrap checks. This is done to give clear logs in
case of failure.
Today we allow shrinking to 1 shard, but that might not be possible due to
too many documents, or because a single shard doesn't meet the requirements for the index.
The logic can be expanded to N shards if the number of source index shards is a multiple of N.
This guarantees that no hotspots are created due to differing numbers of shards
being shrunk into one.
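A tiny illustration of the divisibility constraint described above (not the actual validation code):
```
class ShrinkCheck {
    // a shrink target is valid only if it evenly divides the source shard count,
    // so every target shard receives the same number of source shards
    static boolean canShrinkTo(int sourceShards, int targetShards) {
        return sourceShards % targetShards == 0;
    }
    // e.g. an 8-shard index can shrink to 4, 2, or 1 shards, but not to 3
}
```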
This commit fixes a compilation issue in RefreshListenersTests that
arose from its code being integrated into master before a large pull
request refactoring the handling of thread pools was merged into
master.
This commit adds a bootstrap check for the JVM option OnError being in
use and seccomp being enabled. These two options are incompatible
because OnError allows the user to specify an arbitrary program to fork
when the JVM encounters a fatal error, and seccomp enables system call
filters that prevent forking.
This commit refactors the handling of thread pool settings so that the
individual settings can be registered rather than registering the top
level group. With this refactoring, individual plugins must now register
their own settings for custom thread pools that they need, but a
dedicated API is provided for this in the thread pool module. This
commit also renames the prefix on the thread pool settings from
"threadpool" to "thread_pool". This enables a hard break on the settings
so that:
- some of the settings can be given more sensible names (e.g., the max
number of threads in a scaling thread pool is now named "max" instead
of "size")
- the soft limit on the number of threads in the bulk and
indexing thread pools becomes a hard limit
- the settings names for custom plugins for thread pools can be
prefixed (e.g., "xpack.watcher.thread_pool.size")
- dynamic thread pool settings are removed
Relates #18674
This commit adds a bootstrap check for the JVM option OnOutOfMemoryError
being in use and seccomp being enabled. These two options are
incompatible because OnOutOfMemoryError allows the user to specify an
arbitrary program to fork when the JVM encounters an
OutOfMemoryError, and seccomp enables system call filters that prevents
forking.
This commit also adds support for bootstrap checks that are always
enforced, whether or not Elasticsearch is in production mode.