Currently MetaDataStateFormat loads the first available state file that has
the latest version. If several files are available and some of them use
the new format while others use the legacy format, it should also prefer
the new format. This is typically useful when we upgrade the metadata while
recovering from the gateway: we might write the upgraded state with the new
format while the previous state used the legacy format, so we end up with
two files having the same version but using different formats.
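A minimal sketch of that selection rule, using hypothetical types rather than the actual MetaDataStateFormat code: pick the highest version, and on a version tie prefer the new format.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical stand-in for a state file found on disk.
record StateFileCandidate(long version, boolean newFormat, String path) {}

class StateFilePicker {
    // Highest version wins; on a version tie, the new format wins.
    static Optional<StateFileCandidate> pick(List<StateFileCandidate> candidates) {
        return candidates.stream()
                .max(Comparator.comparingLong(StateFileCandidate::version)
                        .thenComparing(StateFileCandidate::newFormat));
    }

    public static void main(String[] args) {
        List<StateFileCandidate> found = List.of(
                new StateFileCandidate(42, false, "global-42"),     // legacy format
                new StateFileCandidate(42, true, "global-42.st"));  // new format, same version
        System.out.println(pick(found).orElseThrow().path());       // prints global-42.st
    }
}
```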
Close #8343
This will help the exists/missing filters behave as expected in the presence of
empty strings, as well as when using a default analyzer that would generate
tokens for an empty string (uncommon).
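For illustration, a small usage sketch against the 1.x Java API (the field name is made up); the expectation after this change is that a document whose field holds only an empty string matches `exists` rather than `missing`.

```java
import org.elasticsearch.index.query.FilterBuilder;
import org.elasticsearch.index.query.FilterBuilders;

public class ExistsMissingExample {
    public static void main(String[] args) {
        // Field name "tag" is made up for the example.
        FilterBuilder exists = FilterBuilders.existsFilter("tag");
        FilterBuilder missing = FilterBuilders.missingFilter("tag");
        System.out.println(exists + " / " + missing);
    }
}
```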
Close #8198
The transport client created within ExternalTestCluster needs a name that follows our naming convention, otherwise the thread leak filter barfs when running tests against an external cluster. Used "transport_client_external_{n}", where n gets incremented every time a new external cluster gets created. Updated the thread leak filter rules to ignore threads created by such a transport client.
We currently use the djb2 hash function in order to compute the shard a
document should go to. Unfortunately this hash function is not very
sophisticated and you can sometimes hit adversarial cases, such as numeric ids
on 33 shards.
Murmur3 generates hashes with a better distribution, which should avoid the
adversarial cases.
Here are some examples of how 100000 incremental ids are distributed to shards
using either djb2 or murmur3.
5 shards:
Murmur3: [19933, 19964, 19940, 20030, 20133]
DJB: [20000, 20000, 20000, 20000, 20000]
3 shards:
Murmur3: [33185, 33347, 33468]
DJB: [30100, 30000, 39900]
33 shards:
Murmur3: [2999, 3096, 2930, 2986, 3070, 3093, 3023, 3052, 3112, 2940, 3036, 2985, 3031, 3048, 3127, 2961, 2901, 3105, 3041, 3130, 3013, 3035, 3031, 3019, 3008, 3022, 3111, 3086, 3016, 2996, 3075, 2945, 2977]
DJB: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 900, 900, 900, 900, 1000, 1000, 10000, 10000, 10000, 10000, 9100, 9100, 9100, 9100, 9000, 9000, 0, 0, 0, 0, 0, 0]
Even though djb2 looks ideal in some cases (5 shards), the fact that the
distribution of its hashes follows patterns can raise issues with some shard
counts (e.g. 3, or even worse 33).
Some tests have been modified because they relied on implementation details of
the routing hash function.
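For reference, here is a standalone sketch of how such a comparison can be reproduced. It assumes Guava on the classpath and uses Guava's murmur3 as a stand-in for the routing hash, so the exact counts will differ from the numbers above.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

import com.google.common.hash.Hashing;

public class RoutingHashExperiment {

    // Classic djb2: hash = hash * 33 + c, seeded with 5381.
    static int djb2(String id) {
        int hash = 5381;
        for (int i = 0; i < id.length(); i++) {
            hash = ((hash << 5) + hash) + id.charAt(i);
        }
        return hash;
    }

    static int murmur3(String id) {
        // Guava's 32-bit murmur3; seed and byte handling may differ from the routing hash.
        return Hashing.murmur3_32().hashString(id, StandardCharsets.UTF_8).asInt();
    }

    public static void main(String[] args) {
        int numShards = 33;
        int[] djb = new int[numShards];
        int[] mm3 = new int[numShards];
        for (int i = 0; i < 100_000; i++) {
            String id = Integer.toString(i);
            djb[Math.floorMod(djb2(id), numShards)]++;      // floorMod keeps negative hashes in range
            mm3[Math.floorMod(murmur3(id), numShards)]++;
        }
        System.out.println("djb2:    " + Arrays.toString(djb));
        System.out.println("murmur3: " + Arrays.toString(mm3));
    }
}
```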
Close #7954
ClusterDiscoveryConfiguration is part of the test infra and should get exported as part of the test jar. This is achieved by moving the class to org.elasticsearch.test.discovery.
Closes #8337
NettyTransport*Tests were previously in org.elasticsearch.test.transport and ended up being exported with the test jar. org.elasticsearch.transport.netty should be a better place for them, together with existing tests.
The test verifies the correct behavior of a listener, but we only call the listener after publishing a new cluster state. Only checking on the publishing of the state introduces a race condition.
This commit removes all special file handling from DistributorDirectory
that assigned certain files to the primary directory. This special handling
was added to ensure that files that are written more than once are essentially
overwritten. Yet this implementation is consistent all the time and doesn't need
this special handling for files that are written through this directory. Writes
to the underlying directory that don't go through the distributor directory are
not, and have never been, supported.
Note: this commit also fixes the problem of adding directories to the distributor
during restart, where the primary can suddenly change and file mappings are bypassed.
Closes #8276
Also added:
* Better exception handling in UnicastZenPing#ping and MulticastZenPing#ping
* In the join thread that runs the innerJoinCluster loop, remember the last known exception and throw it when assertions are enabled. We loop until the inner join has completed, and if exceptions keep being thrown we should fail the test, because they shouldn't occur in production (at least not that often). See the sketch below.
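A minimal sketch of that retry/assert pattern with illustrative names, not the actual ZenDiscovery code:

```java
public class JoinRetryLoop {
    private volatile Throwable lastJoinFailure;

    void innerJoinCluster() {
        while (!joined()) {
            try {
                tryJoin();
            } catch (Exception e) {
                lastJoinFailure = e; // remember the failure and keep looping
            }
        }
        // Only trips when assertions are enabled, i.e. in tests.
        assert lastJoinFailure == null : "unexpected failure while joining the cluster: " + lastJoinFailure;
    }

    // Placeholders so the sketch compiles; the real logic lives in ZenDiscovery.
    boolean joined() { return true; }
    void tryJoin() {}
}
```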
Closes #8327
This commit adds the ability to associate a bit of state with each
individual aggregation.
The aggregation response can be hard to stitch back together without
having a reference to the aggregation request. In many cases this is not
available: many JSON serializer frameworks cache types globally or have a
static deserialisation override mechanism. In these cases making the
original request available, if at all possible, would be a hack.
The old facets returned `_type` which was just enough metadata to know
what the originating facet type in the request was.
This PR takes `_type` one step further by introducing ANY arbitrary metadata.
This could be further (ab)used, for instance, by generic/automated
aggregations that include UI state (color information, thresholds, user
input states, etc.) per aggregation.
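As a rough illustration (the `meta` field name and the overall shape here are illustrative, not a settled API), a request carrying per-aggregation state could be assembled like this:

```java
import java.io.IOException;

import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

public class AggMetaExample {
    public static void main(String[] args) throws IOException {
        XContentBuilder body = XContentFactory.jsonBuilder()
            .startObject()
                .startObject("aggs")
                    .startObject("revenue_per_color")
                        .startObject("meta")                  // arbitrary, opaque client state
                            .field("color", "#3185FC")
                            .field("threshold", 10000)
                        .endObject()
                        .startObject("sum")
                            .field("field", "price")
                        .endObject()
                    .endObject()
                .endObject()
            .endObject();
        System.out.println(body.string());
    }
}
```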
The discovery.zen.minimum_master_nodes setting can be updated dynamically. Setting it to a value higher than the current number of master nodes will cause the current master to step down. This is dangerous because if done by mistake (typo) there is no way to restore the setting (doing so requires an active master).
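One possible guard against this, sketched with illustrative names rather than the actual validator code, is to reject an update that exceeds the number of currently known master-eligible nodes:

```java
// Illustrative guard, not the actual validator in this change.
public class MinimumMasterNodesGuard {
    /** Returns an error message when the new value would lock out the cluster, or null when it is acceptable. */
    public static String validate(int newMinimumMasterNodes, int currentMasterEligibleNodes) {
        if (newMinimumMasterNodes > currentMasterEligibleNodes) {
            return "cannot set discovery.zen.minimum_master_nodes to more than the current master node count ["
                    + currentMasterEligibleNodes + "]";
        }
        return null;
    }
}
```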
Closes #8321
This removes the leftover ability to automatically choose a bwc version.
An early version of #7966 had the ability to choose a bwc version
automatically, but this was removed before the change was committed.
However, the change was not removed from the ongoing work in #7922
and it made it in unknowingly.
If includes or excludes are set,
XContentFactory.xcontentBuilder() allocates a new
BytesStreamOutput using the default page size, which is 16kb.
This can be optimized to use the length of the sourceRef, because
that is the maximum possible size that the streamOutput will
use.
This reduces the amount of memory allocated for a request
that is fetching 200,000 small documents (~150 bytes each)
by about 300 MB.
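A sketch of the idea, not the exact source-filtering code path: size the output buffer from the source reference instead of relying on the default page size.

```java
import java.io.IOException;

import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.io.stream.BytesStreamOutput;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentType;

public class SizedSourceBuilder {
    // The filtered source can never be larger than the original source,
    // so sourceRef.length() is a safe initial capacity for the stream output.
    public static XContentBuilder builderFor(BytesReference sourceRef, XContentType type) throws IOException {
        BytesStreamOutput out = new BytesStreamOutput(sourceRef.length());
        return new XContentBuilder(XContentFactory.xContent(type), out);
    }
}
```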
Close #8138
This adds HTTP pipelining support to netty. Previously pipelining was not
supported due to the asynchronous nature of Elasticsearch: whichever response
was ready first was returned first, regardless of the order of the
corresponding requests.
The solution to this problem is to add a handler to the netty pipeline
that maintains an ordered list and thus orders the responses before
returning them to the client. This means we always keep some state
on the server side, which also requires some memory to hold the
pending responses.
Pipelining is enabled by default, but can be configured by setting the
http.pipelining property to true|false. In addition, the maximum size of
the event queue can be configured.
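A netty-independent sketch of the ordering logic (names and types are illustrative): tag each request with a sequence number and hold back responses until all lower-numbered ones have been written.

```java
import java.util.PriorityQueue;

public class ResponseOrderer {
    static final class Pending implements Comparable<Pending> {
        final int sequence;
        final Object response;

        Pending(int sequence, Object response) {
            this.sequence = sequence;
            this.response = response;
        }

        @Override
        public int compareTo(Pending other) {
            return Integer.compare(sequence, other.sequence);
        }
    }

    private final PriorityQueue<Pending> held = new PriorityQueue<>();
    private int nextToWrite = 0;

    /** Called when a response becomes ready, possibly out of request order. */
    public synchronized void onResponse(int sequence, Object response) {
        held.add(new Pending(sequence, response));
        // Flush everything that is now contiguous with what was already written.
        while (!held.isEmpty() && held.peek().sequence == nextToWrite) {
            write(held.poll().response);
            nextToWrite++;
        }
    }

    void write(Object response) {
        System.out.println("writing " + response);
    }
}
```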
The initial netty handler is copied from this repo:
https://github.com/typesafehub/netty-http-pipelining
Closes #2665
This changes the weighing function for the filter cache to use a
configurable minimum weight for each filter cached. This value defaults
to 1kb and can be configured with the
`indices.cache.filter.minimum_entry_weight` setting.
This also fixes an issue with the filter cache where the concurrency
level of the cache was exposed as a setting, but not used in cache
construction.
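A minimal sketch of such a weigher using Guava's cache API (the key/value types and the 1kb constant are placeholders):

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.Weigher;

public class MinimumWeightCache {
    static final int MINIMUM_ENTRY_WEIGHT = 1024; // ~1kb floor per entry

    public static Cache<String, byte[]> build(long maxWeightBytes, int concurrencyLevel) {
        return CacheBuilder.newBuilder()
            .maximumWeight(maxWeightBytes)
            .concurrencyLevel(concurrencyLevel)   // actually pass the configured level through
            .weigher(new Weigher<String, byte[]>() {
                @Override
                public int weigh(String key, byte[] cachedFilter) {
                    // Never let an entry count for less than the minimum, so many
                    // tiny filters still add up against maximumWeight.
                    return Math.max(MINIMUM_ENTRY_WEIGHT, cachedFilter.length);
                }
            })
            .build();
    }
}
```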
Relates to #8268
This adds a Listener interface to the ClusterInfoService; it is used
by the DiskThresholdDecider, which registers a listener to check for nodes
passing the high watermark. If a node is past the high watermark, an
empty reroute is issued so shards can be reallocated if desired.
A reroute will only be issued once every
`cluster.routing.allocation.disk.reroute_interval`, which is "60s" by
default.
Refactors InternalClusterInfoService to delegate the nodes stats and
indices stats gathering into separate methods so they can be overridden
by extending classes. Each stat-gathering method returns a
CountDownLatch that can be used to wait until processing for that part
has completed successfully before calling the listeners.
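A simplified sketch of the listener hook and the reroute throttle (the interface shape and fields are placeholders, not the real org.elasticsearch classes):

```java
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

interface ClusterInfoListener {
    void onNewInfo(Map<String, Long> freeBytesPerNode);
}

class HighWatermarkRerouter implements ClusterInfoListener {
    private final long highWatermarkFreeBytes;
    private final long rerouteIntervalNanos;
    private final AtomicLong lastRerouteNanos;

    HighWatermarkRerouter(long highWatermarkFreeBytes, long rerouteIntervalSeconds) {
        this.highWatermarkFreeBytes = highWatermarkFreeBytes;
        this.rerouteIntervalNanos = TimeUnit.SECONDS.toNanos(rerouteIntervalSeconds);
        // Allow the first reroute immediately.
        this.lastRerouteNanos = new AtomicLong(System.nanoTime() - rerouteIntervalNanos);
    }

    @Override
    public void onNewInfo(Map<String, Long> freeBytesPerNode) {
        boolean overHighWatermark = freeBytesPerNode.values().stream()
                .anyMatch(free -> free < highWatermarkFreeBytes);
        long now = System.nanoTime();
        long last = lastRerouteNanos.get();
        // Issue at most one empty reroute per reroute_interval.
        if (overHighWatermark && now - last >= rerouteIntervalNanos && lastRerouteNanos.compareAndSet(last, now)) {
            submitEmptyReroute();
        }
    }

    void submitEmptyReroute() { /* would call the allocation service in the real decider */ }
}
```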
Fixes #8146