This default type was inherited from its ancestor, the (non-paged) recycler, whose memory
usage was unbounded and therefore required soft references to make sure memory could
eventually be released. By contrast, the page cache recycler's memory usage is bounded,
so we can remove the soft references and take load off the garbage collector.
Note: the cache type is already randomized in integration tests.
Closes #6320
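To illustrate the trade-off, a minimal sketch (not the actual recycler code) contrasting a
soft-reference cache, which relies on the collector to reclaim entries, with a bounded cache
that can safely hold strong references:
    import java.lang.ref.SoftReference;
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Unbounded cache: entries are soft references, so the GC must track and
    // clear them under memory pressure (extra collector work).
    Deque<SoftReference<byte[]>> unbounded = new ArrayDeque<>();
    unbounded.push(new SoftReference<>(new byte[16 * 1024]));

    // Bounded cache: a hard cap on entries means strong references are safe
    // and the GC never has to clear them for us.
    final int maxPages = 1024;
    Deque<byte[]> bounded = new ArrayDeque<>();
    if (bounded.size() < maxPages) {
        bounded.push(new byte[16 * 1024]);
    }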
Mustache extracts the key/value pairs for parameter substitution from
objects and maps, but the extractor is chosen on the first execution. If
the params are null we need to pass an empty map instead, to make sure
the map-based extractor is bound.
Closes #6318
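A minimal sketch of the idea, assuming the mustache.java library
(com.github.mustachejava); the template and variable names are illustrative:
    import com.github.mustachejava.DefaultMustacheFactory;
    import com.github.mustachejava.Mustache;
    import java.io.StringReader;
    import java.io.StringWriter;
    import java.util.Collections;
    import java.util.Map;

    Mustache template = new DefaultMustacheFactory()
        .compile(new StringReader("{{query}}"), "script");
    Map<String, Object> params = null; // e.g. no params sent with the request
    // Pass an empty map instead of null so Mustache binds its map-based
    // value extractor on the first execution.
    Object scope = params == null ? Collections.emptyMap() : params;
    StringWriter out = new StringWriter();
    template.execute(out, scope);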
At the moment the plain highlighter only uses the analyzer defined at the type
level. However, during the indexing stage it is possible to define the analyzer
at the document level, for example by mapping '_analyzer' to another field that
contains the required analyzer name. This commit makes sure that highlighting
works correctly in this scenario.
Closes #5497
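At highlight time the selection boils down to something like the following sketch
(Lucene API; the variable names are illustrative stand-ins, not the actual
Elasticsearch internals):
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;

    // Prefer the analyzer resolved for this specific document (e.g. via the
    // _analyzer mapping); fall back to the type-level analyzer.
    Analyzer analyzer = documentAnalyzer != null ? documentAnalyzer : typeLevelAnalyzer;
    TokenStream tokens = analyzer.tokenStream(fieldName, fieldValue);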
Guava's caches have overflow issues around 32GB with our default segment
count of 16 and a weight of one unit per byte. We give them 100MB of headroom,
so the cap is 31.9GB.
This limits the sizes of both the field data and filter caches, the two
large Guava caches.
Closes #6268
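For reference, a sketch of a byte-weighted Guava cache like the two affected ones
(the 16 segments come from CacheBuilder's default concurrency level):
    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.Weigher;

    // 32GB minus 100MB of headroom, i.e. roughly 31.9GB.
    long maxWeightBytes = (32L << 30) - (100L << 20);
    Cache<String, byte[]> cache = CacheBuilder.newBuilder()
        .maximumWeight(maxWeightBytes)
        .weigher(new Weigher<String, byte[]>() {
            @Override
            public int weigh(String key, byte[] value) {
                return value.length; // one weight unit per byte
            }
        })
        .build();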
Because JSON objects are unordered, this also adds an explicit order syntax
that looks like this:
    "highlight": {
        "fields": [
            {"title": { /*params*/ }},
            {"text": { /*params*/ }}
        ]
    }
This is not useful for any of the built-in highlighters but will be useful
in plugins.
Closes #4649
When multiple date formats are specified using the `||` syntax in the field
mappings, the date_histogram aggregation breaks. This is because we were getting
a parser, rather than a printer, from the date formatter for the object we use to
convert DateTime values back into strings. The fix is simply to get the printer
from the date format, with a test to back it up.
Closes #6239
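A minimal sketch of the Joda-Time distinction behind the bug; this mirrors how a
`||` multi-format is typically composed from one printer plus several parsers:
    import org.joda.time.DateTime;
    import org.joda.time.format.DateTimeFormat;
    import org.joda.time.format.DateTimeFormatter;
    import org.joda.time.format.DateTimeFormatterBuilder;
    import org.joda.time.format.DateTimeParser;

    DateTimeParser[] parsers = new DateTimeParser[] {
        DateTimeFormat.forPattern("yyyy-MM-dd").getParser(),
        DateTimeFormat.forPattern("dd/MM/yyyy").getParser()
    };
    // Composed with parsers only, the formatter can parse but not print:
    DateTimeFormatter parseOnly = new DateTimeFormatterBuilder()
        .append(null, parsers).toFormatter();
    parseOnly.parseDateTime("2014-05-21");   // fine
    // parseOnly.print(new DateTime());      // UnsupportedOperationException
    // The fix: also take the printer from the (first) date format.
    DateTimeFormatter printable = new DateTimeFormatterBuilder()
        .append(DateTimeFormat.forPattern("yyyy-MM-dd").getPrinter(), parsers)
        .toFormatter();
    printable.print(new DateTime());         // now works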
An assertion was triggered when percolating documents with a nested object
in the mapping if the document did not actually contain a nested object.
Reason:
MultiDocumentPercolatorIndex checks that the number of documents is
actually > 1. Instead we can just use the SingleDocumentPercolatorIndex
in this case.
Closes #6263
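The resulting dispatch boils down to something like this (class names from the
commit; the surrounding declarations are omitted and simplified):
    // With a single parsed document there is nothing "multi" to index, so take
    // the single-document path and avoid the > 1 assertion entirely.
    PercolatorIndex percolatorIndex = parsedDocument.docs().size() > 1
        ? multiDocumentPercolatorIndex
        : singleDocumentPercolatorIndex;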
The `FieldDataWeighter` allowed a concrete subclass of the cache's generic
type to be used, which causes a ClassCastException and also causes the
CircuitBreaker not to be decremented appropriately.
This was exposed by the settings randomization that is also part of this commit.
Closes #6260
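The failure mode, reduced to a standalone Guava example (the value types here are
stand-ins, not the field data classes): a weigher written against one concrete
subclass, forced onto a cache of the supertype via an unchecked cast, throws as
soon as another subclass is inserted, and the weight bookkeeping the breaker
depends on goes wrong.
    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.Weigher;

    class Value {}
    class BytesValue extends Value { int length = 8; }
    class DoublesValue extends Value {}

    Weigher<String, BytesValue> bytesWeigher = new Weigher<String, BytesValue>() {
        @Override
        public int weigh(String key, BytesValue value) {
            return value.length;
        }
    };
    @SuppressWarnings("unchecked") // the unsound cast at the heart of the bug
    Weigher<String, Value> unsafe = (Weigher<String, Value>) (Weigher<?, ?>) bytesWeigher;
    Cache<String, Value> cache = CacheBuilder.newBuilder()
        .maximumWeight(1024).weigher(unsafe).build();
    cache.put("doubles", new DoublesValue()); // ClassCastException in weigh()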
Today we check whether the DocIdSet we filter by is `fast`, but the check fails
if the DocIdSet is wrapped in an `ApplyAcceptedDocsFilter`, which is always
the case if the index has deleted documents. This commit unwraps
the original DocIdSet in the case of deleted documents.
Closes #6247
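A sketch of the unwrap; the wrapper type and innerSet() accessor are hypothetical
names standing in for the actual internals:
    import org.apache.lucene.search.DocIdSet;

    static DocIdSet unwrap(DocIdSet set) {
        // With deleted documents the set is always wrapped to apply accepted
        // docs, so test the "fast" marker on the original set, not the wrapper.
        if (set instanceof AcceptedDocsDocIdSet) {          // hypothetical wrapper
            return ((AcceptedDocsDocIdSet) set).innerSet(); // hypothetical accessor
        }
        return set;
    }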
AllTokenStream, used to index the _all field, adds some overhead, but
it's not necessary when no fields are boosted or when positions are
not indexed on the _all field.
Closes #6187
Closes #6219
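Conceptually the change is a guard like the following (the flags and factory call
are hypothetical names; the real logic lives in the _all field mapper):
    // Only pay for AllTokenStream (which encodes per-field boosts as payloads)
    // when boosts exist and positions are indexed, so the payloads can matter.
    boolean needsAllTokenStream = anyFieldBoosted && positionsIndexed;
    TokenStream stream = needsAllTokenStream
        ? allTokenStream(fieldName, allEntries, analyzer) // hypothetical factory
        : analyzer.tokenStream(fieldName, text);          // plain, cheaper stream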
This adds support for sending a list of benchmark names and/or wildcard
patterns when submitting an abort request. Standard wildcards such as
"*", "*xxx", and "xxx*" are supported. Multiple benchmark names and
wildcard patterns can be submitted together as a comma-separated list
from the REST API or as a string array from the Java API.
Closes #6185
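A self-contained sketch of the name matching for an abort request; the helper is
written out here for illustration (Elasticsearch has its own wildcard matcher):
    import java.util.ArrayList;
    import java.util.List;

    static boolean simpleMatch(String pattern, String name) {
        if (pattern.equals("*")) return true;
        if (pattern.startsWith("*")) return name.endsWith(pattern.substring(1));
        if (pattern.endsWith("*")) return name.startsWith(pattern.substring(0, pattern.length() - 1));
        return pattern.equals(name);
    }

    // e.g. "my_bench,night*" arriving as a comma-separated list from the REST API:
    static List<String> matching(String commaSeparated, List<String> running) {
        List<String> matched = new ArrayList<>();
        for (String pattern : commaSeparated.split(",")) {
            for (String name : running) {
                if (simpleMatch(pattern.trim(), name)) matched.add(name);
            }
        }
        return matched;
    }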
This change adds a new snapshot state that waits for the replication of a shard to finish before starting the snapshotting process. Because this change adds a new snapshot state, pre-1.2.0 nodes will not be able to join a 1.2.0 cluster that is currently running a snapshot/restore operation.
Closes #5531
This is yet another safety guard to make sure we don't delete data if the local copy is the only one (even if it's no longer part of the cluster state).
Closes #6191
In some places we want to delay the start of a shard recovery because the source node is not ready to receive. At the moment the retry logic ignores the time delay parameter (`retryAfter`), causing a busy-waiting-like scenario. This commit fixes that.
Closes #6226
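The shape of the fix, assuming Elasticsearch's ThreadPool scheduling API; the
request accessor and retry entry point are illustrative:
    // Instead of retrying immediately (busy waiting), honor the delay:
    threadPool.schedule(request.retryAfter(), ThreadPool.Names.GENERIC, new Runnable() {
        @Override
        public void run() {
            doRecovery(request); // illustrative retry entry point
        }
    });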