Backport of #64454
- Add LongRareTerms and StringRareTerms to the DefaultNamedXContents to
ensure that the response of the RareTerms aggregation can be parsed
correctly.
- Add a testSearchWithRareTermsAgg method to test that the response of the
RareTerms aggregation can be parsed correctly.
- Add some test code to ensure that the AggregationsTests execute
successfully.
Co-authored-by: bellengao <gbl_long@163.com>
We were correctly dealing with boosts that had an effect, but mappers
that had a silently accepted but ignored boost parameter were throwing
an error instead of continuing to ignore the boost but emitting a
warning.
Fixes #64982
Currently a rejected execution exception can be swallowed when async
actions return during transport bulk actions. This includes scenarios
where we went async to perform ingest pipelines or index creation. This
commit resolves the issue by propagating a rejected exception.
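A minimal, self-contained sketch of the pattern described above (the class and method names are illustrative, not the actual TransportBulkAction code): when work is handed off asynchronously, a rejected execution must reach the caller's failure callback instead of being swallowed.
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;
import java.util.function.Consumer;

public final class AsyncDispatch {

    /**
     * Runs {@code task} on {@code executor}; if the pool rejects the task,
     * the rejection is propagated to {@code onFailure} instead of being lost.
     */
    public static void dispatch(ExecutorService executor, Runnable task, Consumer<Exception> onFailure) {
        try {
            executor.execute(task);
        } catch (RejectedExecutionException e) {
            // Without this catch the caller would never learn that the async
            // step (e.g. ingest pipeline or index creation) never ran.
            onFailure.accept(e);
        }
    }
}
```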
10s is pretty tight here considering that some x-pack related tests
might have create-index tasks still running in the background after a test;
these require multiple CS updates and can randomly take a few seconds on
slow IO etc.
closes #64580
This change fixes a bug introduced in #61779 that uses a compound order to
compare buckets when merging. The bug is triggered when the compound order
uses a primary sort ordered by key (asc or desc).
This commit ensures that we always extract the primary sort when comparing keys
during merging.
The PR is marked as no-issue since the bug has not been released in any official version.
We have to wait for no more operations here, not for `1`. This mostly worked
because the test thread would add the listener quickly enough to see the
state where either the snapshot or the clone (but not both) had already finished,
but occasionally the test thread would be slow and time out on a state without snapshots in it.
We had an error when serializing fully reduced scripted metrics.
Small typo and a severe lack of tests. Anyway, this fixes the one-character
typo and adds a bunch more tests.
Today indexing to a shard with 2147483519 documents will fail that
shard. We should check the number of documents and reject the write
requests instead.
Closes #51136
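A minimal sketch of the kind of guard described above; the class, method, and field names are illustrative, and the limit mirrors the document count quoted in the commit.
```java
public final class ShardDocLimitCheck {

    // Matches the document count mentioned above (the per-shard hard limit).
    private static final long MAX_DOCS_PER_SHARD = 2_147_483_519L;

    /**
     * Rejects a write request up front instead of letting the shard fail
     * once the underlying index hits its hard document limit.
     */
    public static void ensureCapacity(long currentDocCount, int docsInRequest) {
        if (currentDocCount + docsInRequest > MAX_DOCS_PER_SHARD) {
            throw new IllegalArgumentException(
                "Number of documents in the shard cannot exceed [" + MAX_DOCS_PER_SHARD + "]");
        }
    }
}
```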
This fixes a gap in testing and a bug that can occur in various forms:
When we would start a snapshot or clone related to a shard that was done
snapshotting/cloning but its overall operation was not yet finalized
at the time of starting the operation, we would base the operation off of
the wrong generation. This would not cause a corrupted repo, but would
cause the operation to be `PARTIAL`.
This commit fixes the state machine to take into account the correct generation
in this case.
Closes#63498
This PR implements value fetching for the following field types:
* `text` phrase and prefix subfields
* `search_as_you_type`, plus its subfields
* `token_count`, which is implemented by fetching doc values
Supporting these types helps ensure that retrieving all fields through
`"fields": ["*"]` doesn't fail because of unsupported value fetchers.
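As a rough illustration of what "fetching doc values" for `token_count` amounts to, here is a sketch that reads a document's numeric doc values directly through Lucene (assuming Lucene 8.x on the classpath; the class and method shape are illustrative, not the actual value fetcher).
```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.SortedNumericDocValues;

public final class TokenCountValueFetch {

    /**
     * Reads the stored token counts for one document straight from doc values,
     * which is essentially what a doc-value-backed value fetcher does.
     */
    public static List<Long> fetch(LeafReader reader, String field, int docId) throws IOException {
        SortedNumericDocValues dv = DocValues.getSortedNumeric(reader, field);
        List<Long> values = new ArrayList<>();
        if (dv.advanceExact(docId)) {
            for (int i = 0; i < dv.docValueCount(); i++) {
                values.add(dv.nextValue());
            }
        }
        return values;
    }
}
```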
Plugins are loaded in isolated child class loaders of the root class loader. However, some libraries depend on the context class loader being set. This commit sets the context class loader for the duration of calling each plugin's constructor.
relates #52320
Co-authored-by: Ryan Ernst <ryan@iernst.net>
This PR adds deprecation warnings when accessing System Indices via the REST layer. At this time, these warnings are only enabled for Snapshot builds by default, to allow projects external to Elasticsearch additional time to adjust their access patterns.
Deprecation warnings will be triggered by all REST requests which access registered System Indices, except for a few purpose-specific APIs which access System Indices as an implementation detail and which will continue to allow access to system indices by default:
- `GET _cluster/health`
- `GET {index}/_recovery`
- `GET _cluster/allocation/explain`
- `GET _cluster/state`
- `POST _cluster/reroute`
- `GET {index}/_stats`
- `GET {index}/_segments`
- `GET {index}/_shard_stores`
- `GET _cat/[indices,aliases,health,recovery,shards,segments]`
Deprecation warnings for accessing system indices take the form:
```
this request accesses system indices: [.some_system_index], but in a future major version, direct access to system indices will be prevented by default
```
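A minimal sketch of the check that could sit behind such a warning; the class and parameter names are illustrative, and a plain callback stands in for the real deprecation logger.
```java
import java.util.List;
import java.util.Set;
import java.util.function.Consumer;
import java.util.stream.Collectors;

public final class SystemIndexAccessCheck {

    /**
     * If the request touches any registered system index, emit a warning in the
     * same spirit as the deprecation message quoted above.
     */
    public static void warnOnSystemIndexAccess(List<String> resolvedIndices,
                                               Set<String> registeredSystemIndices,
                                               Consumer<String> deprecationWarning) {
        List<String> matched = resolvedIndices.stream()
            .filter(registeredSystemIndices::contains)
            .collect(Collectors.toList());
        if (matched.isEmpty() == false) {
            deprecationWarning.accept("this request accesses system indices: " + matched
                + ", but in a future major version, direct access to system indices will be prevented by default");
        }
    }
}
```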
This change determines the size of shards before allocating shards that are
recovering from snapshots. It ensures during shard allocation that the
node selected as the recovery target will have enough free
disk space for the recovery event. This applies to regular restores,
CCR bootstrap from remote, as well as mounting searchable snapshots.
The InternalSnapshotInfoService is responsible for fetching snapshot
shard sizes from repositories. It provides a getShardSize() method
to other components of the system that can be used to retrieve the
latest known shard size. If the latest snapshot shard size retrieval
failed, getShardSize() returns
ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE. While
we'd like a better way to handle such failures, returning this value
allows us to keep the existing behavior for now.
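A minimal sketch of the fallback just described; the class, map, and method shape are illustrative, and the local sentinel merely mirrors ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE.
```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class SnapshotShardSizeCache {

    // Stands in for ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE in the real code.
    public static final long UNAVAILABLE_EXPECTED_SHARD_SIZE = -1L;

    private final Map<String, Long> latestKnownSizes = new ConcurrentHashMap<>();

    /** Called when a snapshot shard size has been successfully fetched from the repository. */
    public void updateShardSize(String shardKey, long sizeInBytes) {
        latestKnownSizes.put(shardKey, sizeInBytes);
    }

    /** Returns the latest known size, or the sentinel if the last retrieval failed or never happened. */
    public long getShardSize(String shardKey) {
        return latestKnownSizes.getOrDefault(shardKey, UNAVAILABLE_EXPECTED_SHARD_SIZE);
    }
}
```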
Note that this PR does not address an issue (which we already have today)
where a replica is allocated without knowing how much disk
space is being used by the primary.
Co-authored-by: Yannick Welsch <yannick@welsch.lu>
MapperService carries a lot of weight and is only used to determine if loading of field data for the id field is enabled, which can be done in a different way.
Just a few spots where we can dry up these tests using the snapshot test infrastructure
in core that I found while studying the existing searchable snapshot tests.
* Just some obvious drying up of these super complex tests.
* Mainly just shortening the diff of #61839 here by moving test utilities
to the abstract test case.
Also, making use of the now available functionality to simplify existing tests
and improve logging in them.
For runtime fields, we will want to do all search-time interaction with
a field definition via a MappedFieldType, rather than a FieldMapper, to
avoid interfering with the logic of document parsing. Currently, fetching
values for runtime scripts and building top hits responses both need to
call a method on FieldMapper. This commit moves this method to
MappedFieldType, incidentally simplifying the current call sites and freeing
us up to implement runtime fields as pure MappedFieldType objects.
Splitting some tests out of this class, which has become a catch-all
for random snapshot related tests, into either existing suites that fit
these tests better or one of two new suites, to prevent timeouts
in extreme cases (e.g. `WindowsFS` + many nodes + multiple data paths per node).
No other changes to tests were made whatsoever.
Closes #61541
Introduce 64-bit unsigned long field type
This field type supports
- indexing of integer values from [0, 18446744073709551615]
- precise queries (term, range)
- precise sort and terms aggregations
- other aggregations are based on conversion of long values
to double and can be imprecise for large values.
Backport for #60050
Closes #32434
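A small sketch (plain JDK, not the mapper's actual implementation) showing how 64-bit unsigned values can be handled on the JVM: parsing the full [0, 18446744073709551615] range, comparing in unsigned order, and the sign-bit flip that keeps a signed, sortable encoding consistent with unsigned order.
```java
public final class UnsignedLongExample {

    /** Parses a decimal string in [0, 18446744073709551615] into raw 64-bit storage. */
    public static long parse(String value) {
        return Long.parseUnsignedLong(value); // throws NumberFormatException outside the unsigned range
    }

    /** Unsigned comparison of two raw 64-bit values. */
    public static int compareUnsigned(long a, long b) {
        return Long.compareUnsigned(a, b);
    }

    /**
     * Flipping the sign bit turns unsigned ordering into signed ordering, which is
     * the usual trick for storing unsigned values in a signed, sortable encoding.
     */
    public static long toSortableSigned(long rawUnsignedBits) {
        return rawUnsignedBits ^ Long.MIN_VALUE;
    }

    public static void main(String[] args) {
        long max = parse("18446744073709551615");
        System.out.println(Long.toUnsignedString(max));            // 18446744073709551615
        System.out.println(compareUnsigned(max, parse("1")) > 0);  // true
    }
}
```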
This commit adds a mechanism to MapperTestCase that allows implementing
test classes to check that their parameters can be updated, or throw conflict
errors as advertised. Child classes override the registerParameters method
and tell the passed-in UpdateChecker class about their parameters. Simple
conflicts can be checked, using the existing minimal mappings as a base to
compare against, or alternatively a particular initial mapping can be provided
to check edge cases (eg, norms can be updated from true to false, but not
vice versa). Updates are registered with a predicate that checks that the update
has in fact been applied to the resulting FieldMapper.
Fixes #61631
Most of our field types have the same implementation for their `existsQuery` method which relies on doc_values if present, otherwise it queries norms if available or uses a term query against the _field_names meta field. This standard implementation is repeated in many different mappers.
There are field types that only query doc_values, because they always have them, and field types that always query _field_names, because they never have norms nor doc_values. We could apply the same standard logic to all of these field types as `MappedFieldType` has the knowledge about what data structures are available.
This commit introduces a standard implementation that does the right thing depending on the data structure that is available. With that only field types that require a different behaviour need to override the existsQuery method.
At the same time, this no longer forces subclasses to override `existsQuery`, which could be forgotten when needed. To address this we introduced a new test method in `MapperTestCase` that verifies the `existsQuery` being generated and its consistency with the available data structures.
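A sketch of the decision flow described above, assuming Lucene 8.x on the classpath; the method shape is illustrative rather than the exact `MappedFieldType#existsQuery` signature.
```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.DocValuesFieldExistsQuery;
import org.apache.lucene.search.NormsFieldExistsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public final class ExistsQueryExample {

    /**
     * Picks the cheapest structure that can answer "does this field exist?":
     * doc values if present, otherwise norms, otherwise the _field_names meta field.
     */
    public static Query existsQuery(String field, boolean hasDocValues, boolean hasNorms) {
        if (hasDocValues) {
            return new DocValuesFieldExistsQuery(field);
        } else if (hasNorms) {
            return new NormsFieldExistsQuery(field);
        } else {
            return new TermQuery(new Term("_field_names", field));
        }
    }
}
```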
This commit adds a dedicated threadpool for system index write
operations. The dedicated resources for system index writes serves as
a means to ensure that user activity does not block important system
operations from occurring such as the management of users and roles.
Backport of #61655
`RepositoriesService#doClose` was never called, which led to
mock repositories not unblocking until the `ThreadPool` interrupted
all threads. Thus, stopping a node that is blocked on a mock repository operation wastes `10s`
in each test that does so (which is quite a few, as it turns out).
The dense vector field is not aggregatable although it produces fielddata through its BinaryDocValuesField. It should pass up hasDocValues set to true to its parent class in its constructor, and return isAggregatable false. Same for the sparse vector field (only in 7.x).
This may not have consequences today, but it will be important once we try to share the same exists query implementation throughout all of the mappers with #57607.
We removed index-time boosting back in 5x, and we no longer document the 'boost'
parameter on any of our mapping types. However, it is still possible to define an
index-time boost on a field mapper for a surprisingly large number of field types, and
they even have an effect (sometimes, on some queries).
As a first step in finally removing all traces of index time boosting, this commit emits
a deprecation warning whenever a boost parameter is found on a mapping definition.
Today when a snapshot restore is aborted (for example when the index is
explicitly deleted) while the restoration of the files from the repository has
already started, the file restores are not interrupted. It means that Elasticsearch
will continue to read the files from the repository and will continue to write
them to disk until all files are restored; the store will then be closed and
files will be deleted from disk at some point, but this can take a while. This
also takes up slots in the SNAPSHOT thread pool. The Recovery
API won't show any files actively being recovered; the only notable
indicator would be the active threads in the SNAPSHOT thread pool.
This commit adds a check before reading a file to restore and before
writing bytes on disk so that a closing store can be detected more
quickly and the file recovery process aborted. This way the file
restore just stops, and for most of the repository implementations
it means that no more bytes are read (see #62370 for S3), freeing up
threads in the SNAPSHOT thread pool more quickly too.
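A minimal sketch of the check-before-read/check-before-write loop described above; the `StoreLifecycle` interface and method names are hypothetical stand-ins for the shard store lifecycle check used by the real code.
```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public final class AbortableFileRestore {

    /** Hypothetical stand-in for the store lifecycle check. */
    public interface StoreLifecycle {
        boolean isClosing();
    }

    /**
     * Copies one file from the repository stream to disk, bailing out before each
     * read and each write if the store has started closing.
     */
    public static void restoreFile(StoreLifecycle store, InputStream repositoryStream,
                                   OutputStream diskOutput, int bufferSize) throws IOException {
        byte[] buffer = new byte[bufferSize];
        while (true) {
            ensureOpen(store);                       // check before reading from the repository
            int read = repositoryStream.read(buffer);
            if (read == -1) {
                return;
            }
            ensureOpen(store);                       // check before writing bytes to disk
            diskOutput.write(buffer, 0, read);
        }
    }

    private static void ensureOpen(StoreLifecycle store) throws IOException {
        if (store.isClosing()) {
            throw new IOException("store is closing, aborting file restore");
        }
    }
}
```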
In #57666 we changed when null_value was parsed for ip and date fields. Previously,
the null value was stored as a string, and parsed into a date or InetAddress whenever
a document containing a null value was encountered. Now, the values are parsed when
the mappings are built, which means that bad values are detected up front; if you try to
add a mapping with a malformed ip or date as a null_value, the mapping will be
rejected.
This causes problems for upgrades when you have a badly-formed null_value
in a pre-7.9 cluster. This commit fixes the upgrade case by changing the logic to only
log a warning on the badly-formed value, replicating the earlier behaviour.
Fixes #62363
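A minimal sketch of the lenient behaviour described above; the parser callback and warning consumer are hypothetical placeholders, not the actual mapper code.
```java
import java.util.function.Consumer;
import java.util.function.Function;

public final class LenientNullValueParsing {

    /**
     * Tries to parse the configured null_value; on failure it logs a warning and
     * falls back to null instead of rejecting the whole mapping.
     */
    public static <T> T parseNullValueLeniently(String nullValue, Function<String, T> parser,
                                                Consumer<String> warningLogger) {
        if (nullValue == null) {
            return null;
        }
        try {
            return parser.apply(nullValue);
        } catch (IllegalArgumentException e) {
            warningLogger.accept("Error parsing [" + nullValue
                + "] as a null_value, treating it as if it were not set: " + e.getMessage());
            return null;
        }
    }
}
```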
Backport of #62484 to 7.x branch.
It is possible in mixed version clusters (nodes prior to 7.10)
that a 404 is returned when wiping all data streams.
This is because there are no data streams and
the coordinator node is on a version that doesn't
mark the delete request for wildcard usage.
This implements the `fields` API in `_search` for runtime fields using
doc values. Most of that implementation is stolen from the
`docvalue_fields` fetch sub-phase, just moved into the same API that the
`fields` API uses. At this point the `docvalue_fields` fetch phase looks
like a special case of the `fields` API.
While I was at it I moved the "which doc values sub-implementation
should I use for fetching?" question from a bunch of `instanceof`s to a
method on `LeafFieldData` so we can be much more flexible with what is
returned and we're not forced to extend certain classes just to make the
fetch phase happy.
Relates to #59332
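An illustrative sketch of the refactoring pattern mentioned above: dispatch via a method on the field data abstraction rather than `instanceof` checks in the fetch phase. The interface and class names here are hypothetical, not the real `LeafFieldData` API.
```java
import java.util.ArrayList;
import java.util.List;

public final class FieldDataDispatchExample {

    /** Hypothetical stand-in for LeafFieldData: each implementation formats its own values. */
    public interface LeafValues {
        List<Object> fetchValues(int docId);
    }

    /** A numeric implementation decides for itself how its doc values become response values. */
    public static final class LongLeafValues implements LeafValues {
        private final long[][] perDocValues;

        public LongLeafValues(long[][] perDocValues) {
            this.perDocValues = perDocValues;
        }

        @Override
        public List<Object> fetchValues(int docId) {
            List<Object> out = new ArrayList<>();
            for (long v : perDocValues[docId]) {
                out.add(v);
            }
            return out;
        }
    }

    /** The fetch phase no longer needs instanceof checks; it just asks the field data for its values. */
    public static List<Object> fetch(LeafValues values, int docId) {
        return values.fetchValues(docId);
    }
}
```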
* Add "synthetics-*-*" templates for synthetics fleet data
For the Elastic Agent we currently have `logs` and `metrics`, however, synthetic data doesn't belong
with those and thus we should have a place for it to live. This would be data reported from
heartbeat and under the 'monitoring' category.
This commit adds a composable index template for `synthetics-*-*` indices similar to the work in
#56709 and #57629.
Resolves #61665
PointInTimeBuilder is a ToXContentObject, yet it does not print out a whole object (it is rather a fragment). Also, when it is printed out as part of SearchSourceBuilder, an error is thrown because `pit` should be wrapped in its own object.
This commit fixes this and adds tests for it.
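A rough sketch of the fragment-vs-object distinction, assuming the 7.x `org.elasticsearch.common.xcontent` package layout; the field names mirror the `pit` section of a search request and the wrapping shown here is illustrative rather than the actual builder code.
```java
import java.io.IOException;

import org.elasticsearch.common.Strings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

public final class PitXContentExample {

    /** A fragment: just the fields, no surrounding braces for the "pit" section. */
    static void writePitFragment(XContentBuilder builder, String pitId) throws IOException {
        builder.field("id", pitId);
        builder.field("keep_alive", "1m");
    }

    /** Wrapping the fragment in its own object, as the enclosing search source expects. */
    public static String asWrappedObject(String pitId) throws IOException {
        XContentBuilder builder = XContentFactory.jsonBuilder();
        builder.startObject();
        builder.startObject("pit");
        writePitFragment(builder, pitId);
        builder.endObject();
        builder.endObject();
        return Strings.toString(builder);
    }
}
```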