Scripts currently share the same list across invocations of `getValues`. This
caused a bug in script fields where all documents coming from the same segment
would get the same values (namely, the values of the next document for which
script values were requested). Scripts now return a fresh list on every
invocation of `getValues`.
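A rough illustration of the change (the class below is a made-up stand-in, not the actual script values implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the fix: build a fresh list on every call instead of
// mutating and returning a single list shared across documents.
class LongScriptValues {
    private long[] current = new long[0];

    void setNextDocValues(long[] values) {  // called as the script moves to the next document
        current = values;
    }

    List<Long> getValues() {
        List<Long> result = new ArrayList<>();  // new list per invocation
        for (long v : current) {
            result.add(v);
        }
        return result;
    }
}
```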
Close #8576
Today we throw a generic ElasticsearchException when a recovery is cancelled. This
causes verbose logging, sends shard failures, and triggers additional unnecessary cluster state
events. We can just throw an IndexShardClosedException instead, which prevents the shard failures from being sent.
This commit moves all the translog-related code over to the
NIO2 Path API. It also makes transaction logs write-once, since
a translog file is never reused.
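A minimal sketch of the write-once pattern with the NIO2 Path API (file naming and method names are illustrative, not the actual Translog code):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class TranslogFiles {
    // CREATE_NEW fails if the file already exists, so a translog generation
    // can never be reused.
    static FileChannel createTranslogFile(Path translogDir, long generation) throws IOException {
        Path file = translogDir.resolve("translog-" + generation);
        return FileChannel.open(file, StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE);
    }
}
```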
Closes #8611
While in a perfect world we should only ever have 2 circuit breaker
trips, it's possible to get a race condition between the child and the
parent breaker with many threads. Since multiple breaking exceptions are
not actually a bad thing, it's okay to relax the constraints in the
test.
The race conditions are due to the lack of locking inside the breaker logic,
which keeps its overhead as low as possible. Even though no locking is
used, we rely on atomic counters internally to ensure that the "estimated"
numbers for the breakers never get out of sync (which this test still
checks with no leeway).
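A simplified sketch of the lock-free accounting described above (a single breaker, not the real parent/child breaker classes): the atomic counter keeps the estimate consistent, but two threads can still race past the limit check and both trip the breaker.

```java
import java.util.concurrent.atomic.AtomicLong;

class SimpleBreaker {
    private final AtomicLong estimated = new AtomicLong();
    private final long limitBytes;

    SimpleBreaker(long limitBytes) {
        this.limitBytes = limitBytes;
    }

    void addEstimateAndMaybeBreak(long bytes) {
        long newEstimate = estimated.addAndGet(bytes);
        if (newEstimate > limitBytes) {
            estimated.addAndGet(-bytes);  // roll back so the estimate never drifts
            throw new IllegalStateException("breaker tripped at " + newEstimate + " bytes");
        }
    }
}
```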
This allows arbitrary properties to be retrieved from an aggregation tree. The property is specified using the same syntax as the
`order` parameter in the terms aggregation. If a property path contains a multi-bucket aggregation, the property values from each bucket are returned in an array.
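A hedged usage sketch, assuming the property is exposed through a getProperty(String) accessor on the aggregation object; the accessor name and the aggregation names below are assumptions for illustration, not confirmed API.

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.search.aggregations.Aggregation;
import org.elasticsearch.search.aggregations.InternalAggregation;

class AggregationPropertyExample {
    // Hypothetical: "genres" is a multi-bucket terms aggregation with an
    // "avg_price" sub-aggregation, so the path below should yield one value
    // per genre bucket, returned as an array.
    static Object avgPricePerGenre(SearchResponse response) {
        Aggregation genres = response.getAggregations().get("genres");
        return ((InternalAggregation) genres).getProperty("avg_price.value");
    }
}
```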
Today, Elasticsearch has a separate merge thread pool checking once
per second (by default) whether any merges are necessary, but this is no
longer needed: we now tell Lucene's ConcurrentMergeScheduler never to
"hard pause" threads when merges fall behind, since we do our own index
throttling.
This change goes back to letting Lucene launch merges as needed, and
removes these two expert settings:
index.merge.force_async_merge
index.merge.async_interval
Now merges kick off immediately instead of waiting up to 1 second
before running.
Closes #8643
This commit ensures that a restore operation with wait_for_completion=true doesn't return until all successfully restored shards are started. Before, it returned as soon as the restore operation was over, which caused some shards to be unavailable immediately after the restore completed.
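A usage sketch with the Java client (repository and snapshot names are made up):

```java
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;
import org.elasticsearch.client.Client;

class RestoreExample {
    // With waitForCompletion(true) the call now returns only once the
    // successfully restored shards have started, not merely once the restore
    // operation itself is over.
    static RestoreSnapshotResponse restoreAndWait(Client client) {
        return client.admin().cluster()
                .prepareRestoreSnapshot("my_repository", "my_snapshot")
                .setWaitForCompletion(true)
                .get();
    }
}
```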
Fixes #8340
Our iterator over global ordinals is currently incorrect since it does NOT
return -1 (NO_MORE_ORDS) when all ordinals have been consumed. This bug does
not strike immediately with Elasticsearch since we always consume ordinals in
a random-access fashion. However, it strikes when consuming ordinals through
Lucene helpers such as DocValues#docsWithField.
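For reference, a minimal sketch of the sequential consumption pattern used by such helpers (Lucene 4.x SortedSetDocValues API), which only terminates if the iterator eventually returns the -1 sentinel:

```java
import org.apache.lucene.index.SortedSetDocValues;

class OrdinalConsumer {
    // Counts the ordinals of one document by iterating until NO_MORE_ORDS (-1).
    static long countOrds(SortedSetDocValues ords, int docId) {
        ords.setDocument(docId);
        long count = 0;
        for (long ord = ords.nextOrd(); ord != SortedSetDocValues.NO_MORE_ORDS; ord = ords.nextOrd()) {
            count++;
        }
        return count;
    }
}
```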
Close #8580
Today we try to delete the index directory once all shard locks have been
acquired. Yet, if this fails due to still-running recoveries etc., we might
re-import the index as dangling, which can also happen if the node is restarted.
In contrast to the shard directories, we can safely delete the metastate, which is used
to import dangling indices, while leaving the shard directories untouched.
This fix adds a simple consistency check that intersection edges appear pairwise. Polygonal boundary tests were passing (false positives) on the eastern side of the dateline simply due to the initial order (edge direction) of the intersection edges. Polygons in the Eastern hemisphere (which were not being tested) were correctly failing inside of JTS due to an attempt to connect incorrect intersection edges (that is, edges that were not even intersections). While this patch fixes issue #8467 (and adds broader test coverage), it is not intended as a long-term solution. The mid-term fix (in progress) will refactor all geospatial computational geometry to use ENU / ECF coordinate systems for higher accuracy and eliminate brute-force mercator checks and conversions.
Closes #8467
Always use the LocalGateway* equivalents
We already check in the LocalGateway whether a node is a client node or
is not master-eligible, and skip writing the state there. This allows us
to remove code that was previously used only for tribe nodes (which
are not master-eligible anyway and wouldn't write state) and in
tests (which can shake more bugs out).
Elasticsearch no longer unlocks the Lucene index on startup (this was
dangerous, and could possibly lead to corruption).
Added the new serbian_normalization TokenFilter from Lucene.
NoLockFactory is no longer supported (index.store.fs.fs_lock = none),
and if you have a typo in your fs_lock setting you'll now hit a StoreException
instead of silently using NoLockFactory.
Closes #8588
We started to use the Lucene CRC32 checksums instead of the legacy Adler32
in `v1.3.0`, which was the first version using Lucene `4.9.0`. We can safely
assume that if a segment was written with this version, the checksums
from Lucene can be used, even if the legacy checksum metadata claims an Adler32
for a given file / segment.
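A sketch of the version gate described above (the method name is illustrative, not the actual Store code):

```java
import org.apache.lucene.util.Version;

class ChecksumCheck {
    // Segments written with Lucene >= 4.9.0 always carry CRC32 checksums, so
    // those can be verified even if legacy metadata still lists an Adler32
    // hash for the file.
    static boolean canUseLuceneChecksum(Version segmentVersion) {
        return segmentVersion != null && segmentVersion.onOrAfter(Version.LUCENE_4_9);
    }
}
```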
Closes #8587
Conflicts:
src/main/java/org/elasticsearch/index/store/Store.java
src/test/java/org/elasticsearch/index/store/StoreTest.java
Date math rounding currently works by rounding the date up or down based
on the scope of the rounding. For example, if you have the date
`2009-12-24||/d` it will round down to the inclusive lower end
`2009-12-24T00:00:00.000` and round up to the non-inclusive date
`2009-12-25T00:00:00.000`.
The range endpoint semantics work as follows:
* `gt` - round D down, and use > that value
* `gte` - round D down, and use >= that value
* `lt` - round D down, and use < that value
* `lte` - round D up, and use <=
There are 2 problems with these semantics:
* `lte` ends up including the upper value, which should be non-inclusive
* `gt` only excludes the beginning of the date, not the entire rounding scope
This change makes the range endpoint semantics symmetrical. First, it
changes the parser to round up and down using the first (same as before)
and last (1 ms less than before) values of the rounding scope. This
makes both rounded endpoints inclusive. The range endpoint semantics
are then as follows:
* `gt` - round D up, and use > that value
* `gte` - round D down, and use >= that value
* `lt` - round D down, and use < that value
* `lte` - round D up, and use <= that value
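As a worked example, a minimal sketch with plain Joda-Time (illustrative only, not the parser code) showing the two rounded endpoints for `2009-12-24||/d` under the new, symmetric semantics:

```java
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

public class DateMathRoundingExample {
    public static void main(String[] args) {
        DateTime d = new DateTime(2009, 12, 24, 13, 45, DateTimeZone.UTC);
        DateTime roundedDown = d.dayOfMonth().roundFloorCopy();      // 2009-12-24T00:00:00.000
        DateTime roundedUp = roundedDown.plusDays(1).minusMillis(1); // 2009-12-24T23:59:59.999
        // gte -> >= roundedDown, lt -> < roundedDown
        // lte -> <= roundedUp,   gt -> > roundedUp
        System.out.println(roundedDown + " .. " + roundedUp);
    }
}
```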
closes #8424, closes #8556
We speak of the term vectors of a document, where each field has an associated
stored term vector. Since by default we are requesting all the term vectors of
a document, the HTTP request endpoint should rather be called `_termvectors`
instead of `_termvector`. The usage of `_termvector` is now deprecated, as
are the transport client calls termVector and prepareTermVector.
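A hedged sketch of the renamed client call; the plural builder name is inferred from the deprecation note above and the index/type/id values are made up.

```java
import org.elasticsearch.client.Client;

class TermVectorsExample {
    static void fetchTermVectors(Client client) {
        client.prepareTermVectors("my_index", "my_type", "1").get();
        // deprecated equivalent:
        // client.prepareTermVector("my_index", "my_type", "1").get();
    }
}
```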
Closes #8484
Today there is a race condition between the actual deletion of
the shard and the release of the lock in the store. This race can cause
rare imports of dangling indices if the cluster state update loop
tries to import the dangling index in that particular window. This commit
adds more safety to the import of dangling indices and removes the race
condition by holding on to the lock on store close until the listener
has been notified.