By default the number of searches msearch executes is capped by the number of
nodes multiplied by the default size of the search thread pool. This default can be
overridden using the newly added `max_concurrent_searches` parameter.
Previously the msearch API executed all searches concurrently. If many large
msearch requests were executed at once, this could lead to some searches being rejected
while other searches in the same msearch request succeeded.
The goal of this change is to avoid exhausting the search thread pool in this way.
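As a rough sketch of that default cap (the helper name here is hypothetical, not the actual implementation):
```
// Illustrative only: the default cap is "number of nodes * default search thread pool size".
static int defaultMaxConcurrentSearches(int numNodes, int searchThreadPoolSize) {
    // always allow at least one search to execute
    return Math.max(1, numNodes * searchThreadPoolSize);
}
```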
Closes #17926
Writeable is better for immutable objects like TimeValue.
Switch to writeZLong, which takes up less space than the original
writeLong in the majority of cases. Since we expect negative
TimeValues, we shouldn't use writeVLong.
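For reference, a minimal sketch of the zig-zag idea behind writeZLong (not the actual StreamOutput code): values close to zero, negative or positive, map to small unsigned numbers and therefore encode into few bytes, which is why writeZLong beats writeLong in most cases and why a plain writeVLong is unsuitable for negative values.
```
// Zig-zag maps 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ... so small-magnitude longs
// (including negative ones) become small unsigned values before variable-length encoding.
static long zigZagEncode(long value) {
    return (value << 1) ^ (value >> 63);
}

static long zigZagDecode(long encoded) {
    return (encoded >>> 1) ^ -(encoded & 1);
}
```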
Today we use a random source of UUIDs for assigning allocation IDs,
cluster IDs, etc. Yet, the source of randomness for this is not
reproducible in tests. Since allocation IDs end up as keys in hash maps,
this means allocation decisions are not reproducible in tests, and this
leads to non-reproducible test failures. This commit modifies the
behavior of random UUIDs so that they are reproducible under tests. The
behavior for production code is not changed: we still use a true source
of secure randomness, but under tests we just use a reproducible source
of non-secure randomness.
It is important to note that there is one test,
UUIDTests#testThreadedRandomUUID, that relies on the UUIDs being truly
random. We therefore modify the setup for this test to use a true
source of randomness, so this is the one test that will never be
reproducible, but that is intentional.
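A minimal sketch of the idea, with hypothetical names (the actual change lives in the UUID generation code): the source of randomness is chosen once, and under the test framework it is a seeded, reproducible Random instead of SecureRandom.
```
import java.security.SecureRandom;
import java.util.Random;

// Illustrative only: production keeps a secure source, tests get a reproducible one.
final class UuidRandomnessSketch {
    static Random source(boolean underTest, long testSeed) {
        return underTest ? new Random(testSeed) : new SecureRandom();
    }
}
```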
Relates #18808
When trying to restore a snapshot of an index created in a previous
version of Elasticsearch, it is possible that empty shards in the
snapshot have a segments_N file that has an unsupported Lucene version
and a missing checksum. This leads to issues with restoring the
snapshot. This commit handles this special case by avoiding a restore
of a shard that has no data, since there is nothing to restore anyway.
Closes #18707
Testability of ICSS (IndicesClusterStateService) is achieved by introducing interfaces for IndicesService, IndexService and IndexShard. These interfaces extract all relevant methods used by ICSS (which do not deal directly with the store) and make it easy to mock all the store behavior away in the tests (and cut down on dependencies).
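A tiny sketch of the pattern, with made-up names (not the actual interfaces): ICSS programs against a narrow interface, so tests can substitute an implementation that never touches the store.
```
// Illustrative only: extract just the methods ICSS needs so store behavior can be mocked.
interface ShardFacade {
    void updateRoutingEntry(Object newRouting);   // stand-in for the real signature
}

final class MockShard implements ShardFacade {
    @Override
    public void updateRoutingEntry(Object newRouting) {
        // record the call in the test instead of touching the store
    }
}
```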
Add aggregation profiling. Initially this is only for the shard phases (i.e. the reduce phase will not be profiled in this change).
This change refactors the query profiling class to extract abstract classes where it is useful for other profiler types to share code.
The bug presented as listeners never being called if a refresh happened at the same
time as the listener was added. It was caught rarely by
testConcurrentRefresh. Mostly this change is removing code and adding a comment:
```
Note that it is not safe for us to abort early if we haven't advanced the
position here because we set and read lastRefreshedLocation outside of a
synchronized block. We do that so that waiting for a refresh that has
already passed is just a volatile read but the cost is that any check
whether or not we've advanced the position will introduce a race between
adding the listener and the position check. We could work around this by
moving this assignment into the synchronized block below and double
checking lastRefreshedLocation in addOrNotify's synchronized block but
that doesn't seem worth it given that we already skip this process early
if there aren't any listeners to iterate.
```
This commit addresses a performance issue in
IndicesClusterStateService#applyDeletedShards. Namely, the current
implementation is O(number of indices * number of shards). This is
because of an outer loop over the indices and an inner loop over the
assigned shards, all to check if a shard is in the outer index. Instead,
we can group the shards by index, and then just do a map lookup for each
index.
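A minimal sketch of the optimization, with a stand-in type for the routing classes:
```
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: build the index -> shards map once, then each index is a single
// lookup instead of a scan over every assigned shard.
final class GroupShardsSketch {
    static final class Shard {           // stand-in for the real shard routing class
        final String indexName;
        Shard(String indexName) { this.indexName = indexName; }
    }

    static Map<String, List<Shard>> groupByIndex(List<Shard> assignedShards) {
        Map<String, List<Shard>> byIndex = new HashMap<>();
        for (Shard shard : assignedShards) {
            byIndex.computeIfAbsent(shard.indexName, k -> new ArrayList<>()).add(shard);
        }
        return byIndex;
    }
}
```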
Testing this on a single node with 2500 indices, each with 2 shards,
creating an index before this optimization takes 0.90s and after this
optimization takes 0.19s.
Relates #18788
You declare them like
```
static {
    PARSER.declareInt(optionalConstructorArg(), new ParseField("animal"));
}
```
Other than being optional they follow all of the rules of regular
`constructorArg()`s. Parsing an object with optional constructor args
is going to be slightly less efficient than parsing an object with
all required args if some of the optional args aren't specified, because
ConstructingObjectParser isn't able to build the target object before the
end of the JSON object.
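For context, here is a slightly fuller, hedged example with a made-up target class; the generic signature of ConstructingObjectParser has varied across versions, so treat this as a sketch rather than exact API:
```
// Hypothetical target: one required and one optional constructor argument.
static final ConstructingObjectParser<Zoo, Void> PARSER = new ConstructingObjectParser<>(
        "zoo", args -> new Zoo((String) args[0], (Integer) args[1]));

static {
    PARSER.declareString(constructorArg(), new ParseField("name"));        // required
    PARSER.declareInt(optionalConstructorArg(), new ParseField("animal")); // optional
}
```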
Due to an error in our current TimeIntervalRounding, two dates can
round to the same key even when they are 1h apart, when using
short rounding intervals (e.g. 20m) and a time zone with a DST change.
Here is an example for the CET time zone:
On 25 October 2015, 03:00:00 clocks are turned backward 1 hour to
02:00:00 local standard time. The dates
"2015-10-25T02:15:00+02:00" (1445732100000) (before DST end) and
"2015-10-25T02:15:00+01:00" (1445735700000) (after DST end)
are thus 1h apart, but currently they round to the same value
"2015-10-25T02:00:00.000+01:00" (1445734800000).
This violates an important invariant of rounding, namely that the
rounded value must be less than or equal to the value that is rounded.
It also leads to wrong histogram bucket counts because documents in
[02:00:00+02:00, 02:20:00+02:00) go to the same bucket as documents
from [02:00:00+01:00, 02:20:00+01:00).
The problem happens because in TimeIntervalRounding#roundKey() we
need to perform the rounding operation in local time, but on
converting back to UTC we don't honor the original value's time zone
offset. This fix changes that and adds tests both for DST start and
DST end, as well as a test that demonstrates what happens to bucket
sizes when the DST change is not evenly divisible by the interval.
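A minimal sketch of the corrected approach using Joda-Time directly (an illustration of the idea, not the actual TimeIntervalRounding code): round down in local time, then convert back to UTC using the offset that applied to the original instant rather than whatever offset the rounded local time falls into.
```
import org.joda.time.DateTimeZone;

// Illustrative only: interval rounding that honors the original value's offset.
final class DstRoundingSketch {
    static long round(long utcMillis, long intervalMillis, DateTimeZone tz) {
        long local = tz.convertUTCToLocal(utcMillis);
        long roundedLocal = Math.floorDiv(local, intervalMillis) * intervalMillis;
        // subtract the offset of the *original* instant, not of the rounded local time
        return roundedLocal - tz.getOffset(utcMillis);
    }
}
```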
Previously Elasticsearch used $DATA_DIR/$CLUSTER_NAME/nodes for the path
where data is stored; this commit changes that to $DATA_DIR/nodes.
On startup, if the old folder structure is detected it will be used.
This behavior will be removed in Elasticsearch 6.0.
Resolves #17810
Folded the grok processor into the ingest-common module.
The REST tests have been moved to the ingest-common module as well, because these tests don't run in the rest-api-spec module but in the distribution:integ-test-zip module,
and adding a test plugin there felt just wrong to me. I think this is ok. I left a tiny ingest REST test behind there that tests with an empty pipeline.
Removed messy tests; these were already covered by the REST tests.
Added an ingest test plugin to the test infra so that each module testing integration with ingest doesn't need to write its own plugin.
Moved the reindex ingest tests to the qa module.
Closes #18490
API:
```
curl -XGET 'localhost:9200/twitter/tweet/_search?scroll=1m' -d '{
    "slice": {
        "field": "_uid", <1>
        "id": 0, <2>
        "max": 10 <3>
    },
    "query": {
        "match" : {
            "title" : "elasticsearch"
        }
    }
}'
```
<1> (optional) The field name used to do the slicing (_uid by default)
<2> The id of the slice
<3> The maximum number of slices
By default the splitting is done on the shards first and then locally on each shard using the _uid field
with the following formula:
`slice(doc) = floorMod(hashCode(doc._uid), max)`
For instance, if the number of shards is equal to 2 and the user requested 4 slices, then slices 0 and 2 are assigned
to the first shard and slices 1 and 3 are assigned to the second shard.
Each scroll is independent and can be processed in parallel like any scroll request.
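A hedged sketch of that assignment (illustrative, not the server-side code; the real hash of _uid is not String.hashCode):
```
// Illustrative only: a document belongs to slice floorMod(hash(_uid), max); with 2 shards
// and max = 4, slices 0 and 2 end up on shard 0 and slices 1 and 3 on shard 1.
static int sliceFor(String uid, int maxSlices) {
    return Math.floorMod(uid.hashCode(), maxSlices);   // stand-in hash
}

static int shardForSlice(int sliceId, int numShards) {
    return sliceId % numShards;
}
```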
Closes #13494
This commit modifies the bootstrap check invocations in the might-fork
tests to use the underlying test name when setting up the logging prefix.
This is done to give clear logs in case of failure.
Today we allow shrinking to 1 shard, but that might not be possible due to
too many documents, or because a single shard doesn't meet the requirements for the index.
The logic can be expanded to N shards if the source index's shard count is a multiple of N.
This guarantees that no hotspots are created due to differing numbers of shards
being shrunk into each target shard.
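A minimal sketch of that constraint (hypothetical method, not the actual validation code):
```
// Illustrative only: the target shard count must evenly divide the source shard count,
// so every target shard absorbs the same number of source shards and no hotspot forms.
static int sourceShardsPerTarget(int sourceShards, int targetShards) {
    if (sourceShards % targetShards != 0) {
        throw new IllegalArgumentException("the number of source shards [" + sourceShards
                + "] must be a multiple of the target shards [" + targetShards + "]");
    }
    return sourceShards / targetShards;
}
```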
This commit fixes a compilation issue in RefreshListenersTests that
arose from one change being integrated into master and a large pull
request refactoring the handling of thread pools later being merged into
master on top of it.
This commit adds a bootstrap check for the JVM option OnError being in
use and seccomp being enabled. These two options are incompatible
because OnError allows the user to specify an arbitrary program to fork
when the JVM encounters a fatal error, and seccomp enables system call
filters that prevent forking.
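The shape of the check is roughly the following (names are illustrative; the actual bootstrap check API is not shown):
```
// Illustrative only: the check trips when seccomp filters are installed and OnError is set,
// because the program OnError would fork can never be executed under the filters.
static boolean onErrorCheckFails(boolean seccompInstalled, String onErrorOption) {
    return seccompInstalled && onErrorOption != null && onErrorOption.isEmpty() == false;
}
```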
This commit refactors the handling of thread pool settings so that the
individual settings can be registered rather than registering the top
level group. With this refactoring, individual plugins must now register
their own settings for custom thread pools that they need, but a
dedicated API is provided for this in the thread pool module. This
commit also renames the prefix on the thread pool settings from
"threadpool" to "thread_pool". This enables a hard break on the settings
so that:
- some of the settings can be given more sensible names (e.g., the max
  number of threads in a scaling thread pool is now named "max" instead
  of "size")
- the soft limit on the number of threads in the bulk and indexing
  thread pools can be changed to a hard limit
- the settings names for custom thread pools in plugins can be
  prefixed (e.g., "xpack.watcher.thread_pool.size")
- dynamic thread pool settings can be removed
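As a hedged sketch of what an individually registered setting looks like under the new prefix, using Elasticsearch's Setting class (the default value here is a placeholder; real defaults depend on the node's processor count, and plugins would go through the dedicated thread pool API rather than registering raw settings):
```
// Illustrative only: one individually registered setting under the new "thread_pool"
// prefix, replacing the old group-registered "threadpool.*" settings.
Setting<Integer> searchPoolSize =
        Setting.intSetting("thread_pool.search.size", 8, Setting.Property.NodeScope);
```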
Relates #18674
This commit adds a bootstrap check for the JVM option OnOutOfMemoryError
being in use and seccomp being enabled. These two options are
incompatible because OnOutOfMemoryError allows the user to specify an
arbitrary program to fork when the JVM encounters an
OutOfMemoryError, and seccomp enables system call filters that prevent
forking.
This commit also adds support for bootstrap checks that are always
enforced, whether or not Elasticsearch is in production mode.