This commit addresses a performance issue in
IndicesClusterStateService#applyDeletedShards. Namely, the current
implementation is O(number of indices * number of shards). This is
because of an outer loop over the indices and an inner loop over the
assigned shards, all to check if a shard is in the outer index. Instead,
we can group the shards by index, and then just do a map lookup for each
index.
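In pseudo-Java, the shape of the change looks something like this (a sketch with a hypothetical ShardId type, not the actual implementation):
```
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: group the assigned shards by index name once, so each index
// costs a single map lookup instead of a scan over every assigned shard.
final class ApplyDeletedShardsSketch {

    static Map<String, List<ShardId>> groupShardsByIndex(Iterable<ShardId> assignedShards) {
        Map<String, List<ShardId>> shardsByIndex = new HashMap<>();
        for (ShardId shard : assignedShards) {
            shardsByIndex.computeIfAbsent(shard.getIndexName(), k -> new ArrayList<>()).add(shard);
        }
        return shardsByIndex;
    }

    // Hypothetical stand-in for Elasticsearch's shard identifier.
    static final class ShardId {
        private final String indexName;

        ShardId(String indexName) {
            this.indexName = indexName;
        }

        String getIndexName() {
            return indexName;
        }
    }
}
```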
Testing this on a single node with 2500 indices, each with 2 shards, creating an index takes 0.90s before this optimization and 0.19s after it.
Relates #18788
The painless whitelist has a lot of self-checking; in this case, it checks
for missing covariant overrides. It fails on Java 9 because LocalDate.getEra()
now returns IsoEra instead of Era: https://bugs.openjdk.java.net/browse/JDK-8072746
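A minimal demonstration of the covariant override, using only JDK classes (compiles on JDK 9 and later):
```
import java.time.LocalDate;
import java.time.chrono.ChronoLocalDate;
import java.time.chrono.Era;
import java.time.chrono.IsoEra;

public class CovariantReturnExample {
    public static void main(String[] args) {
        LocalDate date = LocalDate.of(2016, 6, 10);
        // On JDK 9, LocalDate narrows the return type declared by
        // ChronoLocalDate#getEra() from Era to IsoEra (a covariant override):
        IsoEra isoEra = date.getEra();               // compiles on JDK 9+
        Era era = ((ChronoLocalDate) date).getEra(); // interface view still returns Era
        System.out.println(isoEra + " / " + era);
    }
}
```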
To our checker, it looks like we were lazy with whitelisting :)
This means painless works on Java 9 again.
The database files have doubled in size compared to the previous files.
For this reason the database files are now gzip compressed, which required using
`GZIPInputStream` when loading them.
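The loading change is roughly this (a sketch, not the actual plugin code):
```
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.GZIPInputStream;

final class DatabaseLoader {
    // Wrap the raw file stream in a GZIPInputStream so the compressed
    // database files can be read transparently.
    static InputStream open(Path databaseFile) throws IOException {
        return new GZIPInputStream(Files.newInputStream(databaseFile));
    }
}
```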
You declare them like this:
```
static {
PARSER.declareInt(optionalConstructorArg(), new ParseField("animal"));
}
```
Other than being optional, they follow all of the rules of regular
`constructorArg()`s. Parsing an object with optional constructor args
is going to be slightly less efficient than parsing an object with
all required args if some of the optional args aren't specified, because
ConstructingObjectParser isn't able to build the target before the
end of the JSON object.
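For illustration, a hypothetical target class mixing one required and one optional constructor arg (field names are made up, and the parser generics vary across versions):
```
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;

import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg;
import static org.elasticsearch.common.xcontent.ConstructingObjectParser.optionalConstructorArg;

public class Thing {
    // args[0] is the required "name", args[1] the optional "animal"
    // (null when the field is absent from the JSON).
    public static final ConstructingObjectParser<Thing, Void> PARSER =
            new ConstructingObjectParser<>("thing", args -> new Thing((String) args[0], (Integer) args[1]));

    static {
        PARSER.declareString(constructorArg(), new ParseField("name"));
        PARSER.declareInt(optionalConstructorArg(), new ParseField("animal"));
    }

    private final String name;
    private final Integer animal;

    Thing(String name, Integer animal) {
        this.name = name;
        this.animal = animal;
    }
}
```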
Due to an error in our current TimeIntervalRounding, two dates can
round to the same key even when they are 1h apart, when using
short interval roundings (e.g. 20m) and a time zone with a DST change.
Here is an example for the CET time zone:
On 25 October 2015, 03:00:00 clocks are turned backward 1 hour to
02:00:00 local standard time. The dates
"2015-10-25T02:15:00+02:00" (1445732100000) (before DST end) and
"2015-10-25T02:15:00+01:00" (1445735700000) (after DST end)
are thus 1h apart, but currently they round to the same value
"2015-10-25T02:00:00.000+01:00" (1445734800000).
This violates an important invariant of rounding, namely that the
rounded value must be less than or equal to the value that is rounded.
It also leads to wrong histogram bucket counts because documents in
[02:00:00+02:00, 02:20:00+02:00) go to the same bucket as documents
from [02:00:00+01:00, 02:20:00+01:00).
The problem happens because in TimeIntervalRounding#roundKey() we
need to perform the rounding operation in local time, but on
converting back to UTC we don't honor the original value's time zone
offset. This fix changes that and adds tests for both DST start and
DST end, as well as a test that demonstrates what happens to bucket
sizes when the DST change is not evenly divisible by the interval.
Previously Elasticsearch used $DATA_DIR/$CLUSTER_NAME/nodes for the path
where data is stored; this commit changes that to $DATA_DIR/nodes.
On startup, if the old folder structure is detected, it will be used.
This behavior will be removed in Elasticsearch 6.0.
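The fallback is roughly this (a sketch with hypothetical names, not the actual implementation):
```
import java.nio.file.Files;
import java.nio.file.Path;

final class NodePathResolver {
    // Prefer the new $DATA_DIR/nodes layout, but keep using the legacy
    // $DATA_DIR/$CLUSTER_NAME/nodes layout if it already exists on disk.
    static Path resolveNodesDir(Path dataDir, String clusterName) {
        Path legacy = dataDir.resolve(clusterName).resolve("nodes");
        return Files.isDirectory(legacy) ? legacy : dataDir.resolve("nodes");
    }
}
```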
Resolves #17810
Unless explicitly disabled, the parallel new collector is enabled
automatically as soon as the CMS collector is enabled:
```
void Arguments::set_cms_and_parnew_gc_flags() {
  assert(!UseSerialGC && !UseParallelOldGC && !UseParallelGC, "Error");
  assert(UseConcMarkSweepGC, "CMS is expected to be on here");

  // If we are using CMS, we prefer to UseParNewGC,
  // unless explicitly forbidden.
  if (FLAG_IS_DEFAULT(UseParNewGC)) {
    FLAG_SET_ERGO(bool, UseParNewGC, true);
  }
```
While it's fine to be explicit, the UseParNewGC flag is deprecated in JDK
8 and produces warning messages in JDK 9:
```
Java HotSpot(TM) 64-Bit Server VM warning: Option UseParNewGC was deprecated in version 9.0 and will likely be removed in a future release.
```
Thus, we can and should just remove this flag from the default JVM
options.
Relates #18767
Folded the grok processor into the ingest-common module.
The rest tests have been moved to the ingest-common module as well, because these tests don't run in the rest-api-spec module but in the distribution:integ-test-zip module,
and adding a test plugin there felt just wrong to me. I think this is ok. I left a tiny ingest rest test behind there that tests with an empty pipeline.
Removed messy tests; these were already covered by the rest tests.
Added an ingest test plugin to the test infra so that each module testing integration with ingest doesn't need to write its own plugin.
Moved the reindex ingest tests to a qa module.
Closes #18490
API:
```
curl -XGET 'localhost:9200/twitter/tweet/_search?scroll=1m' -d '{
"slice": {
"field": "_uid", <1>
"id": 0, <2>
"max": 10 <3>
},
"query": {
"match" : {
"title" : "elasticsearch"
}
}
}'
```
<1> (optional) The field name used to do the slicing (_uid by default)
<2> The id of the slice
<3> The maximum number of slices
By default the splitting is done on the shards first and then locally on each shard using the _uid field
with the following formula:
`slice(doc) = floorMod(hashCode(doc._uid), max)`
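In Java terms (String#hashCode standing in for the hash actually applied to _uid):
```
final class SliceFormula {
    // Illustration only: String#hashCode stands in for the hash that is
    // actually applied to the _uid value.
    static int sliceOf(String uid, int max) {
        return Math.floorMod(uid.hashCode(), max);
    }
}
```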
For instance, if the number of shards is equal to 2 and the user requested 4 slices, then slices 0 and 2 are assigned
to the first shard and slices 1 and 3 are assigned to the second shard.
Each scroll is independent and can be processed in parallel like any scroll request.
Closes #13494
This commit modifies the bootstrap check invocations in the might-fork
tests to use the underlying test name as the logging prefix when
invoking the bootstrap checks. This is done to give clear logs in
case of failure.