Today we use a random source of UUIDs for assigning allocation IDs,
cluster IDs, etc. Yet, the source of randomness for this is not
reproducible in tests. Since allocation IDs end up as keys in hash maps,
this means allocation decisions are not reproducible in tests, and this
leads to non-reproducible test failures. This commit modifies the
behavior of random UUIDs so that they are reproducible under tests. The
behavior for production code is unchanged: we still use a true source
of secure randomness, but under tests we use a reproducible source of
non-secure randomness.
It is important to note that there is a test,
UUIDTests#testThreadedRandomUUID, that relies on the UUIDs being truly
random. Thus, we have to modify the setup for this test to use a true
source of randomness. This is one test that will never be reproducible,
but that is intentional.
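A minimal sketch of the idea (the class, the system property, and the wiring here are assumptions, not the actual implementation): choose the randomness source once, secure in production and seeded/reproducible under tests.
```
import java.security.SecureRandom;
import java.util.Random;
import java.util.function.Supplier;

// Hypothetical sketch only; property name and layout are assumptions.
class RandomnessSource {
    static final Supplier<Random> RNG;

    static {
        String seed = System.getProperty("tests.seed"); // assumed test-only property
        if (seed != null) {
            // Reproducible, non-secure randomness under tests.
            Random random = new Random(seed.hashCode());
            RNG = () -> random;
        } else {
            // True source of secure randomness in production.
            SecureRandom random = new SecureRandom();
            RNG = () -> random;
        }
    }
}
```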
Relates #18808
When trying to restore a snapshot of an index created in a previous
version of Elasticsearch, it is possible that empty shards in the
snapshot have a segments_N file that has an unsupported Lucene version
and a missing checksum. This leads to issues with restoring the
snapshot. This commit handles this special case by avoiding a restore
of a shard that has no data, since there is nothing to restore anyway.
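A hedged sketch of the special case, with made-up types (the real restore path is different): if the shard has no data files in the snapshot, return before ever reading segments_N.
```
// Illustrative sketch only; the real types and restore path differ.
class ShardRestoreSketch {
    static class SnapshotShardInfo { // hypothetical stand-in
        final int fileCount;
        SnapshotShardInfo(int fileCount) { this.fileCount = fileCount; }
    }

    void restoreShard(SnapshotShardInfo shard) {
        if (shard.fileCount == 0) {
            // Empty shard: nothing to restore, so we never read its
            // segments_N file (unsupported version, missing checksum).
            return;
        }
        // ... normal restore path would run here ...
    }
}
```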
Closes #18707
Testability of ICSS (IndicesClusterStateService) is achieved by introducing interfaces for IndicesService, IndexService and IndexShard. These interfaces extract all relevant methods used by ICSS (which do not deal directly with the store) and make it easy to mock all the store behavior away in the tests (and cut down on dependencies).
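A minimal sketch of the pattern with hypothetical interface and method names (the actual extracted interfaces differ): ICSS codes against narrow interfaces instead of the concrete services, so tests can supply store-free mocks.
```
// Hypothetical names throughout; only the shape of the extraction matters.
interface Shard {
    void updateRoutingEntry(String routing);
}

interface AllocatedIndex<T extends Shard> {
    T getShardOrNull(int shardId);
    void removeShard(int shardId, String reason);
}

interface AllocatedIndices<T extends Shard, U extends AllocatedIndex<T>> {
    U indexService(String indexName);
    void removeIndex(String indexName, String reason);
}
```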
Add aggregation profiling. Initially this will only be for the shard phases (i.e. the reduce phase will not be profiled in this change).
This change refactors the query profiling class to extract abstract classes where it is useful for other profiler types to share code.
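As a hedged illustration of the kind of shared abstraction this enables (names assumed, not the real classes): a timing breakdown keyed by an enum of timing types, which both the query and aggregation profilers could extend.
```
// Hypothetical sketch; the real abstract classes and members differ.
abstract class AbstractProfileBreakdown<T extends Enum<T>> {
    private final long[] timings;   // one slot per timing type
    private final T[] timingTypes;

    AbstractProfileBreakdown(Class<T> timingTypeClass) {
        this.timingTypes = timingTypeClass.getEnumConstants();
        this.timings = new long[timingTypes.length];
    }

    void addTiming(T type, long nanos) {
        timings[type.ordinal()] += nanos;
    }

    long getTiming(T type) {
        return timings[type.ordinal()];
    }
}
```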
The bug presented as listeners never being called if you refresh at the
same time as the listener is added. It was caught rarely by
testConcurrentRefresh. Mostly this change removes code and adds a comment:
```
Note that it is not safe for us to abort early if we haven't advanced the
position here because we set and read lastRefreshedLocation outside of a
synchronized block. We do that so that waiting for a refresh that has
already passed is just a volatile read but the cost is that any check
whether or not we've advanced the position will introduce a race between
adding the listener and the position check. We could work around this by
moving this assignment into the synchronized block below and double
checking lastRefreshedLocation in addOrNotify's synchronized block but
that doesn't seem worth it given that we already skip this process early
if there aren't any listeners to iterate.
```
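A minimal sketch of the trade-off the comment describes, using a plain long and made-up names in place of the real translog location and listener machinery: the last refreshed location is volatile, so checking whether a refresh has already passed is a cheap volatile read, at the cost of racing with listener registration.
```
// Hypothetical sketch; the real code uses Translog.Location and a
// synchronized listener list rather than a bare long.
class RefreshListenersSketch {
    private volatile long lastRefreshedLocation = -1;

    /** Cheap volatile read: has this location already been refreshed? */
    boolean alreadyRefreshed(long location) {
        return location <= lastRefreshedLocation;
    }

    /** Called after a refresh completes; written outside any lock. */
    void afterRefresh(long refreshedUpTo) {
        lastRefreshedLocation = refreshedUpTo;
        // ... fire any listeners waiting on locations <= refreshedUpTo ...
    }
}
```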
This commit addresses a performance issue in
IndicesClusterStateService#applyDeletedShards. Namely, the current
implementation is O(number of indices * number of shards). This is
because of an outer loop over the indices and an inner loop over the
assigned shards, all to check if a shard is in the outer index. Instead,
we can group the shards by index, and then just do a map lookup for each
index.
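A self-contained sketch of the grouping idea, with String standing in for the real Index and ShardRouting types: one pass builds the index-to-shards map, after which each index costs a single lookup instead of a scan over all assigned shards.
```
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: "String" stands in for Index, and the value list stands
// in for the assigned ShardRoutings belonging to that index.
class GroupShardsByIndex {
    static Map<String, List<String>> group(List<String[]> assignedShards) {
        Map<String, List<String>> shardsByIndex = new HashMap<>();
        for (String[] shard : assignedShards) { // shard[0] = index name, shard[1] = shard id
            shardsByIndex.computeIfAbsent(shard[0], k -> new ArrayList<>()).add(shard[1]);
        }
        return shardsByIndex;
    }
}
```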
Testing this on a single-node cluster with 2500 indices, each with 2
shards, creating an index takes 0.90s before this optimization and
0.19s after.
Relates #18788
The painless whitelist has a lot of self-checking; in this case, it checks
for missing covariant overrides. It fails on java 9, because LocalDate.getEra()
now returns IsoEra instead of Era: https://bugs.openjdk.java.net/browse/JDK-8072746
To our checker, it looks like we were lazy with whitelisting :)
This change means painless works on java 9 again.
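For illustration (this is standard java.time behavior, not project code): on java 8 the declared return type of LocalDate.getEra() is Era, while on java 9+ it is narrowed to the subtype IsoEra.
```
import java.time.LocalDate;
import java.time.chrono.Era;

class EraReturnType {
    public static void main(String[] args) {
        Era era = LocalDate.now().getEra(); // fine on java 8 and java 9
        System.out.println(era);
        // On java 9+, this also compiles because getEra() returns IsoEra:
        // java.time.chrono.IsoEra isoEra = LocalDate.now().getEra();
    }
}
```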
The database files have doubled in size compared to the previous files being used.
For this reason the database files are now gzip compressed, which required using
`GZIPInputStream` when loading database files.
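A minimal sketch of the loading change (the class and method here are placeholders, not the actual ingest code): wrap the file stream in a `GZIPInputStream` before handing it to the database reader.
```
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.GZIPInputStream;

class DatabaseLoader {
    // Placeholder method; the real loader passes this stream on to the
    // geoip database reader.
    static InputStream openDatabase(Path databaseFile) throws IOException {
        return new GZIPInputStream(Files.newInputStream(databaseFile));
    }
}
```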
This adds support for optional constructor args to `ConstructingObjectParser`. You declare them like:
```
static {
    PARSER.declareInt(optionalConstructorArg(), new ParseField("animal"));
}
```
Other than being optional they follow all of the rules of regular
`constructorArg()`s. Parsing an object with optional constructor args
will be slightly less efficient than parsing an object with all
required args if some of the optional args aren't specified, because
ConstructingObjectParser isn't able to build the target before the
end of the JSON object.
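A slightly fuller hypothetical example mixing a required and an optional constructor arg; `Thing`, "name", and "animal" are made up, and the exact generic signature of `ConstructingObjectParser` may differ by version:
```
// Thing and the field names are illustrative; only the declare calls
// mirror the API described above. args[] is filled positionally in
// declaration order, and an unspecified optional arg arrives as null.
private static final ConstructingObjectParser<Thing, Void> PARSER =
        new ConstructingObjectParser<>("thing",
                args -> new Thing((String) args[0], (Integer) args[1]));

static {
    PARSER.declareString(constructorArg(), new ParseField("name"));        // required
    PARSER.declareInt(optionalConstructorArg(), new ParseField("animal")); // optional
}
```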