This commit adds a parent pipeline aggregation that allows
sorting the buckets of a parent multi-bucket aggregation.
The aggregation also offers `from` and `size` parameters
in order to truncate the result as desired.
Closes #14928
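A sketch of a request using the new aggregation (registered as `bucket_sort`); the `sales` index and the `date`/`price` fields are hypothetical:
```
POST /sales/_search
{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "interval": "month" },
      "aggs": {
        "total_sales": { "sum": { "field": "price" } },
        "top_months": {
          "bucket_sort": {
            "sort": [ { "total_sales": { "order": "desc" } } ],
            "from": 0,
            "size": 3
          }
        }
      }
    }
  }
}
```
Here `bucket_sort` runs as a parent pipeline aggregation on the `date_histogram` buckets, keeping only the three months with the highest `total_sales`.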
We cut over to internal and external IndexReader/IndexSearcher in #26972, which uses
two independent searcher managers. This has the downside that refreshes of the external
reader will never clear the internal version map, which in turn will trigger additional
and potentially unnecessary segment flushes since memory must be freed. Under heavy
indexing load with low refresh intervals this can cause excessive segment creation, which
causes high GC activity and significantly increases the required segment merges.
This change adds a dedicated external reference manager that delegates refreshes to the
internal reference manager and then `steals` the refreshed reader from the internal
reference manager for external usage. This ensures that external and internal readers
are consistent on an external refresh. As a side effect, this also releases old segments
referenced by the internal reference manager, which could otherwise hold on to
already-merged-away segments until it is refreshed due to a flush or indexing activity.
In order to avoid churn when applying a new `ClusterState`, there are some checks that compare parts of the old and new states and, if they are equal, discard the new object and reuse the old one. Since `ClusterState` updates are now largely diff-based, this code is unnecessary: applying a diff also reuses any old objects that are unchanged. Moreover, the code compares the parts of the `ClusterState` using their `version()` values, which is not guaranteed to be correct because of a lack of consensus.
This change removes this optimisation, and tests that objects are still reused as expected via the diff mechanism.
This commit fixes a comment in an index shard test that was inaccurate
because it was copied from another test and not modified to reflect the
reasoning of the test into which it was copied.
When a primary is promoted, rolling the translog generation here makes
it simpler to reason about the relationship between primary terms and
translog generations. Note that this is not strictly necessary for
correctness (e.g., to avoid duplicate operations with the same sequence
number within a single generation).
Relates #27313
* Fixes #19768: scripted_metric _agg parameter disappears if params are provided (see the example after this list)
* Test case for #19768
* Compare boolean to false instead of negating it
* Added mocked script in ScriptedMetricIT
* Fix test in ScriptedMetricIT for implicit _agg map
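A minimal sketch of the fixed behaviour: user-supplied `params` and the implicit `_agg` map are both visible to the scripts (the `transactions` index and `amount` field are hypothetical):
```
POST /transactions/_search
{
  "size": 0,
  "aggs": {
    "total": {
      "scripted_metric": {
        "params": { "field": "amount" },
        "init_script": "params._agg.values = []",
        "map_script": "params._agg.values.add(doc[params.field].value)",
        "combine_script": "double sum = 0; for (def v : params._agg.values) { sum += v } return sum",
        "reduce_script": "double sum = 0; for (def a : params._aggs) { sum += a } return sum"
      }
    }
  }
}
```
Before the fix, providing `params` as above caused the `_agg` entry to disappear from the script context.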
* Add limits for ngram and shingle settings (#27211)
Create index-level settings:
max_ngram_diff - maximum allowed difference between max_gram and min_gram in
NGramTokenFilter/NGramTokenizer. Default is 1.
max_shingle_diff - maximum allowed difference between max_shingle_size and
min_shingle_size in ShingleTokenFilter. Default is 3.
Throw an IllegalArgumentException when
trying to create an NGramTokenFilter, NGramTokenizer, or ShingleTokenFilter
where the difference between max_size and min_size exceeds the setting's value
(see the example below).
Closes #25887
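For example, raising the new limit at index creation time (the index and tokenizer names are hypothetical):
```
PUT /my_index
{
  "settings": {
    "index.max_ngram_diff": 2,
    "analysis": {
      "tokenizer": {
        "my_ngram": { "type": "ngram", "min_gram": 2, "max_gram": 4 }
      }
    }
  }
}
```
With the default `index.max_ngram_diff` of 1, the `min_gram: 2`/`max_gram: 4` tokenizer above would be rejected with an IllegalArgumentException.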
The `TemplateUpgradeService` allows plugins to register a callback that mutates index templates upon recovery. This is handy for upgrade logic that needs to make sure that an existing index template is updated once the cluster is upgraded to a new version of the plugin (and ES).
Currently, the service has complicated logic to decide which node should perform the upgrade. It prefers the master node if it is on the highest version in the cluster; otherwise it falls back to one of the non-coordinating nodes that are on the latest version. While this attempts to make sure that new nodes can assume their template version is in place (old nodes still need to be able to operate under both old and new templates), it has an inherent problem: the master (on an old version) may not be able to process the put template request with the new template - it may miss certain features.
This PR changes the logic to be simpler and always rely on the current master node. This comes at the price that new nodes need to operate with both old and new templates. That price is small, as they need to operate with old indices regardless of the template. On the flip side, we reduce a lot of complexity in what can happen in the cluster.
If an out of memory error is thrown while merging, today we quietly
rewrap it into a merge exception and the out of memory error is
lost. Instead, we need to rethrow out of memory errors, and in fact any
fatal error here, and let those go uncaught so that the node is torn
down. This commit causes this to be the case.
Relates #27265
Some code-paths use anonymous classes (such as NonCollectingAggregator
in the terms agg), which messes up the display name in the profiler. If
we encounter an anonymous class, we need to grab the superclass's name.
Another naming issue was that ProfileAggs were not delegating to the
wrapped agg's name for toString(), leading to ugly display.
This PR also fixes up the profile documentation. Some of the examples were
executing against empty indices, which shows different profile results
than a populated index (and made for confusing examples).
Finally, I switched the agg display names from the fully qualified name
to the simple name, so that it's similar to how the query profiles work.
Closes #26405
The warnings headers have a fairly limited set of valid characters
(cf. quoted-text in RFC 7230). While we have assertions that we adhere
to this set of valid characters ensuring that our warning messages do
not violate the specification, we were neglecting the possibility that
arbitrary user input would trickle into these warning headers. Thus,
what was missing here were tests for these situations and encoding of
characters that appear outside the set of valid characters. This commit addresses
this by encoding any characters in a deprecation message that are not
from the set of valid characters.
Relates #27269
This change adds a new `_split` API that allows splitting indices into a new
index with a power of two more shards than the source index. This API works
alongside the `_shrink` API but doesn't require any shard relocation before
indices can be split.
The split operation is conceptually an inverse `_shrink` operation since we
initialize the index with a _synthetic_ number of routing shards that are used
for the consistent hashing at index time. Compared to indices created with
earlier versions this might produce slightly different shard distributions but
has no impact on the per-index backwards compatibility. For now, the user is
required to prepare an index to be splittable by setting the
`index.number_of_routing_shards` at index creation time. The setting allows the
user to prepare the index to be splittable in factors of
`index.number_of_routing_shards`, i.e. if the index is created with
`index.number_of_routing_shards: 16` and `index.number_of_shards: 2` it can be
split into `4, 8, 16` shards. This is an intermediate step until we can make
this the default. This also allows us to safely backport this change to 6.x.
The `_split` operation is implemented internally as a DeleteByQuery at the
Lucene level that is executed while the primary shards execute their initial
recovery. Subsequent merges that are triggered due to this operation will not be
executed immediately. All merges will be deferred until the shards are started
and will then be throttled accordingly.
This change is intended for the 6.1 feature release but will not support splitting
pre-6.1 indices unless these indices have been shrunk before. In that case
these indices can be split backwards into their original number of shards.
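A sketch of the workflow with hypothetical index names: create a splittable index, mark it read-only, then split it.
```
PUT /source_index
{
  "settings": {
    "index.number_of_shards": 2,
    "index.number_of_routing_shards": 16
  }
}

PUT /source_index/_settings
{
  "settings": { "index.blocks.write": true }
}

POST /source_index/_split/target_index
{
  "settings": { "index.number_of_shards": 4 }
}
```
Because the source was created with 16 routing shards, it can be split into 4, 8, or 16 shards; here the target gets 4.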
While it's not possible to upgrade the Jackson dependencies
to their latest versions yet (see #27032 (comment) for more),
it's still possible to upgrade to the latest 2.8.x version.
We have a hidden setting called `index.queries.cache.term_queries` that disables caching of term queries in the query cache.
However, term queries have not been cached by Lucene's UsageTrackingQueryCachingPolicy since version 6.5.
This makes the Elasticsearch policy useless, and it also makes it impossible to re-enable caching for term queries.
Since this change appeared in Lucene 6.5, the setting has been a no-op since version 5.4 of Elasticsearch.
The change in this PR removes the setting and the custom policy.
Only tests should use the single-argument Environment constructor. To
enforce this, the single-arg Environment constructor has been replaced with
a test framework factory method.
Production code (beyond initial Bootstrap) should always use the same
Environment object that Node.getEnvironment() returns. This Environment
is also available via dependency injection.
If the master disconnects from the cluster after initiating a snapshot, but just before the snapshot switches from the INIT to the STARTED state, the snapshot can get indefinitely stuck in the INIT state. This error is specific to v5.x+ and was triggered by keeping the master node that stepped down in the node list; the cleanup logic in snapshot/restore assumed that if the master steps down it is always removed from the node list. This commit changes the logic to trigger cleanup even if no nodes left the cluster.
Closes #27180
For FsBlobStore and HdfsBlobStore, if the repository is read-only, the blob store should be aware of the readonly setting and not create directories if they don't exist.
Closes #21495
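For example, registering a read-only filesystem repository (the repository name and path are hypothetical); with this change the blob store will no longer attempt to create missing directories for it:
```
PUT /_snapshot/my_read_only_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups",
    "readonly": true
  }
}
```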
After recent changes in InternalStats#doXContentBody the corresponding xContent
output of the parsed aggregation needed to be changed in a similar way.
* Uses norms for exists query if enabled
This change means that for indices created from 6.1.0 onwards, if norms are enabled we will not write the field name to the `_field_names` field, and for an exists query we will instead use the NormsFieldExistsQuery that was added in Lucene 7.1.0. If norms are not enabled, or if the index was created before 6.1.0, `_field_names` will be used as before.
* Fixes tests
The uid bytes (as the type#id) were needlessly being created even though
they are no longer needed after the move to single type per index. This
commit avoids creating these when parsed documents are constructed.
Relates #27241
We do some accounting in IndexShard that is not necessarily correct since
we maintain two different index readers. This change moves the accounting under
the engine which knows what reader we are refreshing.
Relates to #26972
This local checkpoint tracker uses collections of bit sets to track
which sequence numbers are complete, eventually removing these bit sets
when the local checkpoint advances. However, these bit sets were eagerly
allocated so that if a sequence number far ahead of the checkpoint was
marked as completed, all bit sets between the "last" bit set and the bit
set needed to track the marked sequence number were allocated. If this
sequence number was too far ahead, the memory requirements could be
excessive. This commit opts for a different strategy for holding on to
these bit sets and enables them to be lazily allocated.
Relates #27179
We added an index-level setting for controlling the size of the bit sets
used to back the local checkpoint tracker. This setting is really only
needed to control the memory footprint of the bit sets but we do not
think this setting is going to be needed. This commit removes this
setting before it is released to the wild after which we would have to
worry about BWC implications.
Relates #27191
* Enhances exists queries to reduce the need for `_field_names`
Before this change we wrote the names of all the fields in a document to a `_field_names` field and then implemented exists queries as a term query on this field. The problem with this approach is that it bloats the index and also affects indexing performance.
This change adds a new method `existsQuery()` to `MappedFieldType` which is implemented by each sub-class. For most field types, if doc values are available a `DocValuesFieldExistsQuery` is used, falling back to `_field_names` if doc values are disabled. Note that only fields where no doc values are available are written to `_field_names` (an example `exists` query is sketched after this list).
Closes #26770
* Addresses review comments
* Addresses more review comments
* implements existsQuery explicitly on every mapper
* Reinstates ability to perform term query on `_field_names`
* Added bwc depending on index created version
* Review Comments
* Skips tests that are not supported in 6.1.0
These values will need to be changed after backporting this PR to 6.x
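As referenced above, a minimal `exists` query whose implementation changes here (index and field names are hypothetical):
```
GET /my_index/_search
{
  "query": {
    "exists": { "field": "user" }
  }
}
```
Depending on the field's mapping, this now runs as a `DocValuesFieldExistsQuery` or falls back to a term query on `_field_names`.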
This query returns documents that match one or more
of the provided terms. The number of terms that must match varies
per document and is either controlled by a minimum should match
field or computed per document by a minimum should match script.
Closes #26915
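A sketch of the new query (assuming it is exposed as `terms_set`; the index and field names are hypothetical):
```
GET /job-candidates/_search
{
  "query": {
    "terms_set": {
      "programming_languages": {
        "terms": ["java", "php", "c++"],
        "minimum_should_match_field": "required_matches"
      }
    }
  }
}
```
Each document's numeric `required_matches` field controls how many of the supplied terms it must contain; a `minimum_should_match_script` can be used instead to compute that number per document.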