This commit makes queries and filters be parsed the same way, using the
QueryParser abstraction. This allowed us to remove duplicate code that we had
for similar queries/filters such as `range`, `prefix` or `term`.
The mapper listener concept is now only used as a callback to the
MapperService when new fields are added. This change removes the
listeners, instead storing a link to the mapper service in
each doc mapper.
This commit makes create, update and delete operations on an index durable
by default. Users can opt out and use async translog flushes
on a per-index basis by setting `index.translog.durability=async`.
Initial benchmarks running on SSDs have shown that bulk indexing is about 7% - 10% slower
compared to async translog flushes. This change is orthogonal to
the transaction log sync interval and will only sync the transaction log if the operation
has not yet been concurrently synced, i.e. if multiple indexing requests are submitted and
one operation's sync call already persists the operations of the others, only one sync call is executed.
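As a minimal sketch of the opt-out (the index name is hypothetical), the per-index setting can be changed through the settings API:

```
PUT /my_index/_settings
{
  "index.translog.durability": "async"
}
```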
Relates to #10933
The mapper listener abstractions for object and field mappers are used
to notify the mapper service of new fields, as well as to collect
all object and field mappers through a set of traversal functions.
This change removes the traversal functions in favor of simple
iteration over subfields of a mapper.
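As a rough illustration of the new shape (the types below are simplified stand-ins, not the actual ES classes), collecting mappers becomes plain recursive iteration:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for illustration only; not the real ES mapper classes.
interface SimpleMapper extends Iterable<SimpleMapper> {
    String name();
}

final class MapperWalk {
    // Collect a mapper and all of its subfields by plain iteration,
    // replacing the old callback-based traversal functions.
    static List<SimpleMapper> collect(SimpleMapper root) {
        List<SimpleMapper> out = new ArrayList<>();
        out.add(root);
        for (SimpleMapper sub : root) {
            out.addAll(collect(sub));
        }
        return out;
    }
}
```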
Getting this to work would be a lot of work (creating two different
repositories, having another GPG key, integrating this into our build).
Closes #6498
This is pretty much a workaround for the fact that we simply
close the downstream resources once the shard is closed. This means
the document parser will barf with an NPE or something similar, while
an AlreadyClosedException would be appropriate.
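A minimal sketch of the intended behavior (class and member names are hypothetical; only `AlreadyClosedException` is the real Lucene class):

```java
import org.apache.lucene.store.AlreadyClosedException;

final class GuardedDocumentParser {
    private volatile boolean closed;

    void parse(byte[] source) {
        if (closed) {
            // Fail fast with a meaningful exception instead of letting
            // nulled-out downstream resources surface as an NPE.
            throw new AlreadyClosedException("shard is closed");
        }
        // ... actual parsing against downstream resources ...
    }

    void close() {
        closed = true;
    }
}
```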
Removes the More Like This API; users should now use the More Like This query.
The MLT API tests were converted to their query equivalents. Also some
cleanups in the MLT tests.
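For reference, a minimal More Like This query of the kind users should migrate to (index and field names are hypothetical, and the exact parameter set varies by version):

```
GET /my_index/_search
{
  "query": {
    "more_like_this": {
      "fields": ["title", "body"],
      "like": "some sample text",
      "min_term_freq": 1
    }
  }
}
```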
Closes #10736
Closes #11003
Codehaus announced they are shutting down their services: https://www.codehaus.org/
We should remove their repository from our pom as it could cause errors and useless HTTP calls.
Related to #10939
In some cases it might happen that a mapping which is already available on the
master node is not available yet on the node that holds the primary shard.
This commit changes indexing on the primary shard so that if a dynamic update
is triggered, the index operation is retried until the required mappings are
available locally (using cluster state observing).
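The retry logic, reduced to a self-contained sketch (all names are hypothetical, and polling stands in for the real cluster-state-observer based wait):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

final class RetryOnMissingMapping {
    // Re-try the index operation until the dynamically-updated mapping is
    // visible locally. The real implementation waits on cluster state
    // changes rather than sleeping.
    static void indexWithRetry(Runnable indexOp, BooleanSupplier mappingAvailable)
            throws InterruptedException {
        while (!mappingAvailable.getAsBoolean()) {
            TimeUnit.MILLISECONDS.sleep(50); // stand-in for cluster state observation
        }
        indexOp.run();
    }
}
```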
Previously, we were using the "statistical", technically accurate names. Instead, we
should probably use the names that people are familiar with, e.g. "Holt Winters" instead
of "triple exponential". To that end:
- `single_exp` becomes `ewma` (exponentially weighted moving average)
- `double_exp` becomes `holt`
When the `triple_exp` is added, it will be called `holt_winters`.
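For example, the renamed model would be selected like this in a `moving_avg` pipeline aggregation (the aggregation name and `buckets_path` are hypothetical):

```
{
  "aggs": {
    "the_movavg": {
      "moving_avg": {
        "buckets_path": "the_sum",
        "model": "holt"
      }
    }
  }
}
```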
Current features (e.g. the update API) and future features (e.g. the reindex API)
depend on `_source`. This change locks down the field so that
it can no longer be disabled. It also removes the legacy settings
`compress`/`compress_threshold`.
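For reference, this is the kind of now-removed legacy configuration (index and type names are hypothetical):

```
PUT /my_index
{
  "mappings": {
    "my_type": {
      "_source": {
        "compress": true,
        "compress_threshold": "1kb"
      }
    }
  }
}
```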
Closes #8142
Closes #10915
If we have to do the one-time loading of fielddata, it requires
more permissions than Groovy scripts currently have (zero). This
is because of RamUsageEstimator reflection and so on in PagedBytes.
GroovySecurityTests only test a numeric field, so add a string field
to the test (so PagedBytes fielddata gets created, etc.).
LUCENE-6422 - the PackedQuadTree enhancement - was committed in Lucene 5.2, which is now integrated with ES 2.0. This eliminates the need to carry our own local lucene.spatial package. This commit removes the now unnecessary files.
When the sync interval is set to 0, we benefit from using the buffered translog type; there is no need to change to the simple type, since a sync issued for one operation gets a chance to fsync multiple buffered operations at once, so we do not have to sync separately for the other ones before returning each one.
This commit adds the path of the PID file to the Environment. It also adds it to the Security Manager, since the PID file is deleted by a shutdown hook when the JVM exits.
Double.MIN_VALUE does not follow the same semantics as Integer.MIN_VALUE. Namely, it
represents the smallest positive, non-zero value a double can hold. Since the test uses negative
doubles, this can incorrectly find the min/max metric for a set of values.
Instead, Double.NEGATIVE_INFINITY needs to be used, which represents the smallest value possible.
Not strictly necessary, but MAX_VALUE was switched to POSITIVE_INFINITY just to be 100% correct.
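A small Java example of the difference (the input values are arbitrary):

```java
public class DoubleMinValueDemo {
    public static void main(String[] args) {
        // Double.MIN_VALUE is the smallest positive non-zero double,
        // not the most negative double (unlike Integer.MIN_VALUE).
        System.out.println(Double.MIN_VALUE);   // 4.9E-324
        System.out.println(Integer.MIN_VALUE);  // -2147483648

        double[] values = { -5.0, -2.5 };

        // Seeding a max-tracker with Double.MIN_VALUE misses all-negative input:
        double max = Double.MIN_VALUE;
        for (double v : values) max = Math.max(max, v);
        System.out.println(max);                // 4.9E-324 -- wrong

        // Seeding with Double.NEGATIVE_INFINITY behaves correctly:
        max = Double.NEGATIVE_INFINITY;
        for (double v : values) max = Math.max(max, v);
        System.out.println(max);                // -2.5
    }
}
```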
We already fail the shard in the `onFailure` method if the replica
operation barfs. An additional check was added recently that
bypasses the cluster state observer, which causes replicas to fail
if the mappings are not yet present.