The compression bug fixed in #7210 can still strike us since we are
running BWC tests against these versions. This commit forcefully disables
compression if the compatibility version is < 1.3.2 to prevent debugging
already-known issues.
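A minimal, self-contained sketch of the version guard described above; the version encoding and the guard's placement are assumptions for illustration, not the actual ES code:

```java
// Sketch only: disable compression when the compatibility version is < 1.3.2.
// The integer encoding of versions here is illustrative, not ES's Version class.
public class CompressionGuard {
    static boolean allowCompression(int major, int minor, int patch) {
        int encoded = major * 1_000_000 + minor * 1_000 + patch;
        return encoded >= 1_003_002; // 1.3.2 fixed the compression bug (#7210)
    }

    public static void main(String[] args) {
        System.out.println(allowCompression(1, 3, 1)); // false -> compression forced off
        System.out.println(allowCompression(1, 3, 2)); // true  -> compression allowed
    }
}
```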
The query_string query has an option for analyzing wildcard/prefix expressions (#787) on a best-effort basis.
This commit adds the `analyze_wildcard` option to simple_query_string as well.
The default is `false`, so the existing behavior of simple_query_string is unchanged.
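A hypothetical usage sketch via the Java API; the builder and method names below follow later Java API versions and are assumptions, not taken from this commit:

```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.SimpleQueryStringBuilder;

// Assumed method names; the REST option this maps to is `analyze_wildcard`.
public class AnalyzeWildcardExample {
    public static void main(String[] args) {
        SimpleQueryStringBuilder query = QueryBuilders
                .simpleQueryStringQuery("qui*")  // wildcard term to be analyzed
                .analyzeWildcard(true);          // opt in; the default stays false
        System.out.println(query);
    }
}
```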
When data nodes receive mapping updates from the master, they parse them and merge them into their own in-memory representation (if present). If this results in different bytes than the master sent, the nodes send a refresh-mapping command to indicate to the master that its byte-level storage of the mapping should be refreshed via the document mappers. This comes in handy when the mapping format has changed in a backwards-compatible manner and we want to make sure we can still rely on the bytes to identify changes. An example of such a change can be seen at #4760.
This commit extends the logic to include the `_default_` type, which was never refreshed before. In some unlucky scenarios, this caused the `_default_` mapping to be parsed with every cluster state update.
Closes #8413
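An illustrative sketch of the byte-level comparison described above (toy code, not the actual MapperService classes):

```java
import java.util.Arrays;

// A data node merges the mapping the master sent, re-serializes it, and
// requests a refresh-mapping when the bytes come out different.
public class MappingRefreshCheck {
    static boolean needsRefreshMapping(byte[] fromMaster, byte[] reSerialized) {
        // Different bytes for the same logical mapping mean the master's
        // stored source uses a stale (but still compatible) format.
        return !Arrays.equals(fromMaster, reSerialized);
    }

    public static void main(String[] args) {
        byte[] sent = "{\"_default_\":{}}".getBytes();
        byte[] merged = "{\"_default_\":{\"_all\":{\"enabled\":true}}}".getBytes();
        System.out.println(needsRefreshMapping(sent, merged)); // true -> refresh
    }
}
```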
This prevents too-difficult regular expressions from consuming
excessive RAM/CPU; the default max_determinized_states is 10,000 (same
as Lucene) but the query_string and regexp query/filter can override it
per-request.
This also upgrades to a new Lucene 5.0.0 snapshot.
Closes #8386
Closes #8357
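A sketch of the limit at the Lucene level (Lucene 4.9+ APIs): determinizing an adversarial regular expression blows past the 10,000-state budget and throws instead of eating RAM/CPU.

```java
import org.apache.lucene.util.automaton.RegExp;
import org.apache.lucene.util.automaton.TooComplexToDeterminizeException;

public class DeterminizeLimit {
    public static void main(String[] args) {
        try {
            // Determinizing this pattern needs ~2^21 states, far over the budget.
            new RegExp("(a|b)*a(a|b){20,1000}").toAutomaton(10000);
        } catch (TooComplexToDeterminizeException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```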
DocIdSets.isFast(DocIdSet) has two issues:
- it works on the DocIdSet interface, while some doc sets can generate either
  slow or fast sets depending on their options (e.g. whether an OrDocIdSet is
  fast or not depends on the wrapped clauses).
- it only works because the result of this method is only taken into account
  when a DocIdSet has a non-null `bits()`.
This commit changes the method to work on top of a DocIdSetIterator and to use
a black list rather than a white list: slow iterators should really be the
exception rather than the rule.
Closes #8380
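A minimal sketch of the black-list approach described above; the real check lives in DocIdSets and covers more cases, so treat this as illustrative:

```java
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.FilteredDocIdSetIterator;

public class SlowIterators {
    // Flag only the few known-slow iterator types and treat everything else
    // as fast by default: slow iterators are the exception, not the rule.
    static boolean isBroken(DocIdSetIterator iterator) {
        return iterator instanceof FilteredDocIdSetIterator;
    }
}
```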
The rename(String, String) method doesn't allow this implementation to use a simple
concurrent map. There is a race during a rename operation where files are not fully
renamed but are already visible via #listAll(). This inconsistency can lead to problems
when opening commit points, since the pending_segments_N as well as segments_N files are
visible but not yet atomically renamed.
Yet, none of the synced methods are long-running, so adding synchronization
doesn't introduce bottlenecks here. The Directory#sync(...) method is not synchronized since
it doesn't change any mapping nor does it depend on the mapping.
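An illustrative, self-contained sketch of the synchronization described above (toy types, not the actual Directory implementation):

```java
import java.util.HashMap;
import java.util.Map;

// rename must be atomic with respect to listAll(): both remove and put happen
// under the same lock, so a listing never sees the old and new name together
// (or neither), which is exactly the race described above.
class InMemoryFiles {
    private final Map<String, byte[]> files = new HashMap<>();

    public synchronized void rename(String from, String to) {
        byte[] content = files.remove(from);
        files.put(to, content);
    }

    public synchronized String[] listAll() {
        return files.keySet().toArray(new String[0]);
    }
}
```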
Previously we didn't calculate these checksums even though we have a checksum
to compare against. Since we now also verify checksums for legacy files, #checkIntegrity
should also calculate the legacy checksums.
Closes #8407
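A sketch of computing a legacy Adler32 checksum over a file with the JDK's `java.util.zip.Adler32`, as #checkIntegrity now does for legacy files; the real code reads through the Store rather than directly from disk:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.Adler32;

public class LegacyChecksum {
    // Stream the whole file through an Adler32 digest and return the value,
    // which can then be compared against the checksum recorded at write time.
    static long adler32(Path file) throws IOException {
        Adler32 digest = new Adler32();
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                digest.update(buffer, 0, read);
            }
        }
        return digest.getValue();
    }
}
```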
When a Lucene 4.8+ file is transferred, Store returns a VerifyingIndexOutput
that verifies both the CRC32 integrity and the length of the file.
However, for older files, problems can make it to the Lucene level. This is not
great, since older Lucene files aren't especially strong as far as detecting
issues goes. For example, if a network transfer is closed on the remote side,
we might write a truncated file... which old Lucene formats may or may not detect.
The idea here is to verify old files with their legacy Adler32 checksum, plus the
expected length. If they don't have an Adler32 (segments_N, jurassic elasticsearch?
it's optional as far as the protocol goes), then at least check the length.
We could improve it for segments_N, which has had an embedded CRC32 forever in
Lucene, but this gets trickier. Long term, we should also try to improve the tests
around here, especially backwards-compat testing; we should test that detected
corruptions are handled properly.
Closes #8399
Conflicts:
src/main/java/org/elasticsearch/index/store/Store.java
src/test/java/org/elasticsearch/index/store/StoreTest.java
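A minimal sketch of the verify-while-writing idea described above (toy types, not the actual VerifyingIndexOutput): track length and Adler32 as bytes arrive, then fail fast if either doesn't match what the source reported.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.Adler32;

class VerifyingOutput extends ByteArrayOutputStream {
    private final Adler32 digest = new Adler32();
    private final long expectedLength;
    private final long expectedChecksum;

    VerifyingOutput(long expectedLength, long expectedChecksum) {
        this.expectedLength = expectedLength;
        this.expectedChecksum = expectedChecksum;
    }

    @Override
    public synchronized void write(int b) {
        super.write(b);
        digest.update(b);
    }

    @Override
    public synchronized void write(byte[] b, int off, int len) {
        super.write(b, off, len);
        digest.update(b, off, len);
    }

    // Call after the transfer completes: a truncated transfer trips the
    // length check even when no checksum is available for the file.
    void verify() throws IOException {
        if (size() != expectedLength) {
            throw new IOException("length mismatch: expected "
                    + expectedLength + " but got " + size());
        }
        if (digest.getValue() != expectedChecksum) {
            throw new IOException("Adler32 checksum mismatch");
        }
    }
}
```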
If a shard (e.g. a replica) gets initialized after we indexed the document, it gets refreshed internally, so we find the doc and its term_vectors and the test fails.
Fixes an issue where only absolute byte values were taken into account when
kicking off an automatic reroute due to disk usage. Also randomizes the
tests to use either an absolute value or a percentage so both cases are tested.
Also adds logging for each node over the high and low watermark every
time new cluster info usage is gathered (defaults to every 30
seconds).
Related to #8368
Fixes #8367
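A self-contained sketch of the dual-threshold check described above; the actual logic lives in the disk threshold decider, and these names are illustrative:

```java
public class WatermarkCheck {
    // Trigger on either the absolute free-bytes limit or the used percentage,
    // not just on absolute bytes as before this fix.
    static boolean overHighWatermark(long freeBytes, long totalBytes,
                                     long highFreeBytes, double highUsedPercent) {
        double usedPercent = 100.0 * (totalBytes - freeBytes) / totalBytes;
        return freeBytes < highFreeBytes || usedPercent > highUsedPercent;
    }

    public static void main(String[] args) {
        // 5gb free of 100gb; high watermark: 10gb free or 90% used -> over
        System.out.println(overHighWatermark(5L << 30, 100L << 30,
                10L << 30, 90.0)); // true
    }
}
```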
Today we use busy waiting and sampling when we execute HealthRequests
on the master. This is tricky, since we might sample a not yet fully applied
cluster state and make decisions based on the partial cluster state. This can
lead to ugly problems, since requests might be routed to nodes where shards are
already marked as relocated while on the actual cluster state they are still
started. While this window is usually very small, it can lead to ugly test failures.
This commit moves the health request over to a listener pattern that gets the
actually applied cluster state.
Closes #8350
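An illustrative sketch of the listener pattern described above (toy types, not the actual cluster-state infrastructure): instead of busy-waiting and sampling a possibly half-applied state, callers register a listener that only fires once the new state has actually been applied.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

class AppliedStateNotifier {
    private final List<Consumer<String>> listeners = new CopyOnWriteArrayList<>();

    void onAppliedState(Consumer<String> listener) {
        listeners.add(listener);
    }

    void applyNewState(String clusterState) {
        // The state is fully applied before listeners run, so a listener
        // never observes a partially applied cluster state.
        for (Consumer<String> listener : listeners) {
            listener.accept(clusterState);
        }
    }
}
```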
When a node stops, we cancel any ongoing join process. With #8327, we improved this logic and wait for it to complete before shutting down the node. However, the joining thread is part of a thread pool and will not stop until the thread pool is shut down.
Another issue raised by the unneeded wait is that when we shut down, we may ping ourselves, which results in an ugly warn-level log. We now log all remote exceptions during pings at debug level.
Closes #8359
Previously the bulk of our shard recovery code was in a 300-line
anonymous class in `RecoverySource`. This made it difficult to find and
more difficult to read.
This factors out that code into a `ShardRecoveryHandler` class, adding
javadocs for each function and phase of the recovery, as well as
comments explaining some of the more esoteric functions performed during
recovery.
It's hoped that this will help more people understand Elasticsearch's
recovery procedure.
No *major* functionality has changed, only typo corrections, some minor
allocation improvements and logging clarification changes.
If the translog file is corrupted or truncated, the stream is never closed
and the file handle leaks. This commit closes the stream in the case of an
exception.
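A sketch of the fix pattern (illustrative, not the actual translog code): try-with-resources guarantees the file handle is released even when the stream turns out to be corrupted or truncated mid-read.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class TranslogRead {
    static void readOps(Path translogFile) throws IOException {
        try (InputStream in = Files.newInputStream(translogFile)) {
            int b;
            while ((b = in.read()) != -1) {
                // Decode operations here; this may throw on truncated or
                // corrupt input, but the stream is still closed on exit.
            }
        }
    }
}
```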
Today we use the File API for file deletion as well as recursive
directory deletions. This API returns a boolean indicating whether operations
were successful while hiding the actual reason why they failed.
The Path API throws an actual exception that might provide better
insights and debug information.
Closes #8366
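A small runnable comparison of the two JDK APIs described above:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DeleteDemo {
    public static void main(String[] args) throws IOException {
        Path p = Paths.get("does-not-exist");

        // Old File API: just 'false', with no reason for the failure.
        boolean ok = p.toFile().delete();
        System.out.println("File#delete: " + ok);

        // Path API: throws an exception that says what actually went wrong.
        try {
            Files.delete(p);
        } catch (NoSuchFileException e) {
            System.out.println("Files.delete: " + e);
        }
    }
}
```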
When a node stops, we cancel any ongoing join process. With #8327, we improved this logic and wait for it to complete before shutting down the node. In our tests we typically shut down an entire cluster at once, which makes it very likely for nodes to be joining while shutting down. This introduces a race condition where the joinThread.interrupt can happen before the thread starts waiting on pings, which causes the shutdown logic to be slow. This commit improves on that by repeatedly trying to stop the thread in smaller waits, as sketched below.
Another side effect of the change is that we are now more likely to ping ourselves while shutting down, which results in an ugly warn-level log. We now log all remote exceptions during pings at debug level.
Closes #8359
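A sketch of the repeated stop attempts (names illustrative): an interrupt may land before the join thread starts waiting on pings, so keep re-delivering it in small waits instead of one long one.

```java
public class JoinThreadStopper {
    static void stopJoinThread(Thread joinThread) throws InterruptedException {
        while (joinThread.isAlive()) {
            joinThread.interrupt(); // re-deliver in case the first one was lost
            joinThread.join(100);   // wait briefly, then try again
        }
    }
}
```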
The tests were failing because there was a shard which didn't get any documents, while the tests assumed all shards had documents. This commit fixes that assumption.