This commit refactors the o.e.bootstrap package to o.opensearch.bootstrap. All
references throughout the code are also refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors the remaining o.e.cluster packages to
o.opensearch.cluster. All references throughout the codebase are also
refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors the remaining o.e.action.support subpackages to
o.opensearch.action.support. All references throughout the codebase are also refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
Refactoring:
- rename `org.elasticsearch.search.builder` to `org.opensearch.search.builder`
- rename `org.elasticsearch.search.collapse` to `org.opensearch.search.collapse`
- rename `org.elasticsearch.search.dfs` to `org.opensearch.search.dfs`
- rename `org.elasticsearch.search.lookup` to `org.opensearch.search.lookup`
- rename `org.elasticsearch.search.rescore` to `org.opensearch.search.rescore`
- rename `org.elasticsearch.search.searchafter` to `org.opensearch.search.searchafter`
- rename `org.elasticsearch.search.slice` to `org.opensearch.search.slice`
- rename `org.elasticsearch.search.sort` to `org.opensearch.search.sort`
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
Refactor the transport package in the server module to rename the package from `org.elasticsearch.transport` to `org.opensearch.transport`
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
This commit refactors o.e.Version to o.opensearch.Version. This is retained in a
single commit to serve as a reference for re-versioning the opensearch codebase
from legacy 7.10 to 1.0.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors all OpenSearch classes in the root server package to
o.opensearch. All references throughout the codebase are also refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors the o.e.cli and o.e.client packages from elasticsearch to
o.opensearch.cli and o.opensearch.client packages in the server module,
respectively.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors the following subpackages:
* o.e.cluster.health
* o.e.cluster.metadata
* o.e.cluster.node
to o.opensearch.cluster.*. All other references throughout the codebase are
updated.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
Refactor the repositories package in the server module to rename the package from `org.elasticsearch.repositories` to `org.opensearch.repositories`
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
This commit refactors the following:
* o.e.cluster.ack
* o.e.cluster.action
* o.e.cluster.block
* o.e.cluster.coordination
to o.opensearch.cluster.*. All other references are also refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors all classes in o.e.cluster to o.opensearch.cluster.
References throughout the codebase are updated.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
Refactor the package `org.elasticsearch.script` in the server module to rename it to `org.opensearch.script`.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
Refactor the `libs/grok` module to rename the package name from `org.elasticsearch.grok` to `org.opensearch.grok` as part of the rename to OpenSearch work.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
Refactor the `libs/dissect` module to rename the package name from `org.elasticsearch.dissect` to `org.opensearch.dissect` as part of the rename to OpenSearch work.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
Refactor the libs/ssl-config module to rename the package names from `org.elasticsearch.common.ssl` to `org.opensearch.common.ssl`.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
Refactor the server/tasks package to rename the package names from `org.elasticsearch.tasks` to `org.opensearch.tasks`.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
Refactor the server/threadpool package to rename the package names from `org.elasticsearch.threadpool` to `org.opensearch.threadpool`.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
This commit refactors the classes in o.e.action.support to
o.opensearch.action.support. The remaining directories will be refactored in a
separate commit.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
Refactor `server/snapshots` to rename the package names from `org.elasticsearch.snapshots` to `org.opensearch.snapshots` as part of the rename to OpenSearch work.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
The file was renamed but git is instead reporting a file deletion. This commit reverts the deletion. We will create a separate PR to rename the file.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
This commit refactors all classes in o.e.action.bulk to o.opensearch.action.bulk.
All references throughout the rest of the codebase are updated.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors all classes in o.e.action.admin.cluster to
org.opensearch.action.admin.cluster. References are updated
throughout the codebase.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors top level classes in o.e.action to o.opensearch.action.
References throughout the rest of the codebase have been updated.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors o.e.action.admin.indices package to
o.opensearch.action.admin.indices. References throughout the codebase have been
updated to reflect the new package location.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
[Rename] modules/ingest-user-agent
Refactor ingest-user-agent module as part of the Elasticsearch to OpenSearch renaming.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
This commit refactors ElasticsearchParseException class in the server module to
OpenSearchParseException. References and usages throughout the rest of the
codebase are fully refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors the ElasticsearchDirectoryReader class located in the
server module to OpenSearchDirectoryReader. References and usages, along with
method names, throughout the rest of the codebase are fully refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors the ElasticsearchStatusException in the server module to
OpenSearchStatusException. References and usages throughout the rest of the
codebase are fully refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors ElasticsearchSecurityException class in the server module
to OpenSearchSecurityException. References and usages throughout the rest of the
codebase are fully refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors the ElasticsearchClient class located in the server module to
OpenSearchClient. References and usages throughout the rest of the codebase are
fully refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors the ElasticsearchException class located in the server module
to OpenSearchException. References and usages throughout the rest of the
codebase are fully refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
A seed was hit in (#43157) that caused mutateInstance to generate an identical
instance. This change prevents that.
Signed-off-by: Peter Nied <petern@amazon.com>
This change fixes a problem when using space or tab as a separator in the CSV processor: we now check whether the current character is the separator before we check whether it is whitespace.
This also improves tests to always check all combinations of separators and quotes.
Closes #67013
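For illustration, a minimal sketch of the corrected check order; the method and helper names here are hypothetical, not the processor's actual code:
```
// Hypothetical sketch: when space or tab is configured as the separator,
// the separator check must run before the generic whitespace check, or the
// character is swallowed as padding and never terminates the field.
private void consume(char c) {
    if (c == separator) {               // check the separator first
        endCurrentField();
    } else if (c == quote) {
        enterQuotedField();
    } else if (Character.isWhitespace(c)) {
        skipPadding();                  // only now treat it as ignorable padding
    } else {
        appendToCurrentField(c);
    }
}
```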
When removing the "lexer hack" to remove type context from the lexer, static inner class resolution
wasn't properly accounted for. This change adds code to handle static inner class resolution.
This PR removes outdated overrides in some tests that prevent them from testing
older index versions. Also removes an old comment + logic from
AggregatorFactoriesTests.
In the refactoring of TextFieldMapper, we lost the ability to define
a default search or search_quote analyzer in index settings. This
commit restores that ability, and adds some more comprehensive
testing.
Fixes #65434
The search-as-you-type mapper was using an incorrect default analyzer when
creating its merge builder, which meant that a configured analyzer would not
be included in serialization. This in turn resulted in the master node
getting an incorrect configuration, and search-as-you-type fields always
using a standard analyzer.
This has already been fixed via subsequent refactorings in 7.11 and master,
so this fix is for 7.10 only.
Resolves #65319
This change fixes a bug where when doing compound assignment involving String concatenation, the
right-hand side will fail to cast to String appropriately and throw a ClassCastException.
This reverts a change where null-safe was enhanced to cause a compile-time error instead of a run-
time error when the target value was a primitive type. The reason for the reversion is consistency
across def/non-def types and versions. I've added a follow up issue to fix this behavior in general
(#65098).
We were correctly dealing with boosts that had an effect, but mappers
that had a silently accepted but ignored boost parameter were throwing
an error instead of continuing to ignore the boost but emitting a
warning.
Fixes #64982
Now that we're consistently using `can_match` to filter which shards we
run on we can get this confusing case:
1. You have a search with, say, a range and a sub-agg.
2. That search has a query that `can_match` can recognize will match no
docs. On *any* shard.
3. So we dutifully run it on a single shard so it can produce the
"empty" aggs.
4. The shard we pick happens to not have the target of the range mapped.
5. This kicks in the special range aggregator that doesn't collect any
documents.
6. Before this commit, that range aggregator *also* never produced any
sub-aggs.
So, without this change, it was quite possible for a search that
happened to match no documents to "throw away" the sub-aggs of a range
and a few other aggs.
We've had this problem for a long, long time but it is more confusing
now because `can_match` is really kicking in and causing us to see cases
where it looks like you are targeting a lot of shards but you really are
only targeting a couple. It used to be that to get the "no sub-aggs"
behavior you had to explicitly target only shards that didn't map the
target field of the `range` agg. And, like, in that case it isn't too
bad because you targeted a sort of degenerate shard. But now that
`can_match` is doing its thing you can end up with the confusing steps
above. It took me several hours to track down what was happening, even though
I know how the individual pieces of all of this work. It took four hours to
figure out how they fit together in this case....
Anyway! This replaces all the aggregator implementations that throw out
the sub-aggregators with ones that keep them. I think this'll be less
confusing in the future.
Closes #64142
This commit updates the list of system index names to be complete and
correct for Kibana and APM. The pattern `.kibana*` is too inclusive for
system indices and actually includes the
`.kibana-event-log-${version}-${int}` pattern for the Kibana event log,
which should only be hidden and not a system index. Additionally, the
`.apm-custom-link` index was not included in the list of system
indices. Finally, the reporting pattern has been updated to match that
of the permissions given to the kibana_system role.
Backport of #63950
* Add APM index to Kibana system indices, making it
accessible through the _kibana endpoint and giving it the
same access privileges as the other Kibana system indices.
* Parameterize kibana system index tests by index name
Backport of #63756
Co-authored-by: William Brafford <williamrandolphbrafford@gmail.com>
In #57892 I broke *some* sub-aggregations inside of the `parent` and
`child` aggregator, specifically any sub-aggregations that do work in
the `postCollect` phase. This fixes it by delaying the post collect
phase of aggs under `parent` and `child` until `beforeBuildingBuckets`
because, well, we haven't done *any* collection until after that phase.
This PR implements value fetching for the following field types:
* `text` phrase and prefix subfields
* `search_as_you_type`, plus its subfields
* `token_count`, which is implemented by fetching doc values
Supporting these types helps ensure that retrieving all fields through
`"fields": ["*"]` doesn't fail because of unsupported value fetchers.
This PR adds factory methods for the most common implementations (sketched below):
* `SourceValueFetcher.identity` to pass through the source value untouched.
* `SourceValueFetcher.toString` to simply convert the source value to a string.
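A hedged sketch of what these factories can look like; the constructor arguments and the `parseSourceValue` hook below are assumptions about the shape, not the exact signatures:
```
// Illustrative only — argument lists are assumed:
public static SourceValueFetcher identity(String fieldName, MapperService mapperService) {
    return new SourceValueFetcher(fieldName, mapperService) {
        @Override
        protected Object parseSourceValue(Object value) {
            return value; // pass the source value through untouched
        }
    };
}

public static SourceValueFetcher toString(String fieldName, MapperService mapperService) {
    return new SourceValueFetcher(fieldName, mapperService) {
        @Override
        protected Object parseSourceValue(Object value) {
            return value.toString(); // simply convert the source value to a string
        }
    };
}
```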
When constructing a value fetcher, the 'parsesArrayValue' flag must match
`FieldMapper#parsesArrayValue`. However there is nothing in code or tests to
help enforce this.
This PR reworks the value fetcher constructors so that `parsesArrayValue` is
'false' by default. Just as for `FieldMapper#parsesArrayValue`, field types must
explicitly set it to true and ensure the behavior is covered by tests.
Follow-up to #62974.
An invalid void expression type from a null safe operator caused ClassFormatError for the script Map
x = ['0': 0]; x?.0 > 1. This change sets and propagates the correct expression type for the null safe
operator to be written out.
This PR adds deprecation warnings when accessing System Indices via the REST layer. At this time, these warnings are only enabled for Snapshot builds by default, to allow projects external to Elasticsearch additional time to adjust their access patterns.
Deprecation warnings will be triggered by all REST requests which access registered System Indices, except for a few purpose-specific APIs which access system indices as an implementation detail and will continue to allow access to system indices by default:
- `GET _cluster/health`
- `GET {index}/_recovery`
- `GET _cluster/allocation/explain`
- `GET _cluster/state`
- `POST _cluster/reroute`
- `GET {index}/_stats`
- `GET {index}/_segments`
- `GET {index}/_shard_stores`
- `GET _cat/[indices,aliases,health,recovery,shards,segments]`
Deprecation warnings for accessing system indices take the form:
```
this request accesses system indices: [.some_system_index], but in a future major version, direct access to system indices will be prevented by default
```
MapperService carries a lot of weight and is only used to determine if loading of field data for the id field is enabled, which can be done in a different way.
In #62509 we already plugged in faster sequential access for stored fields in the fetch phase.
This PR now uses the potentially better field reader in SourceLookup as well.
Rally experiments show that this speeds up cases where runtime fields that use
"_source" are added e.g. via "docvalue_fields" or are used in queries or aggs.
Closes #62621
* Setting `script.painless.regex.enabled` has a new option,
`use-factor`, the default. This defaults to using regular
expressions but limiting the complexity of the regular
expressions.
In addition to `use-factor`, the setting can be `true`, as
before, which enables regular expressions without limiting them.
`false` totally disables regular expressions, which was the
old default.
* New setting `script.painless.regex.limit-factor`. This limits
regular expression complexity by limiting the number of characters
a regular expression can consider based on input length.
The default is `6`, so a regular expression can consider
`6 * input length` characters. With input
`foobarbaz` (length `9`), for example, the regular expression
can consider `54` (`6 * 9`) characters.
This reduces the impact of exponential backtracking in Java's
regular expression engine; a sketch of this limiting approach
appears after this list.
* add `@inject_constant` annotation to whitelist.
This annotation signals that a compiler setting will
be injected at the beginning of a whitelisted method.
The format is `argnum=settingname`:
`1=foo_setting 2=bar_setting`.
Argument numbers must start at one and must be sequential.
* Augment
`Pattern.split(CharSequence)`
`Pattern.split(CharSequence, int)`,
`Pattern.splitAsStream(CharSequence)`
`Pattern.matcher(CharSequence)`
to take the value of `script.painless.regex.limit-factor` as an
injected parameter, limiting as explained above when this
setting is in use.
Fixes: #49873
Backport of: 93f29a4
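One plausible way to enforce such a budget — not necessarily the actual implementation — is to wrap the input in a `CharSequence` that counts `charAt` accesses and fails once `limit-factor * input length` characters have been consumed:
```
// Illustrative sketch of a character-budget wrapper:
final class LimitedCharSequence implements CharSequence {
    private final CharSequence wrapped;
    private final int budget;   // limit-factor * wrapped.length()
    private int consumed;

    LimitedCharSequence(CharSequence wrapped, int limitFactor) {
        this.wrapped = wrapped;
        this.budget = limitFactor * wrapped.length();
    }

    @Override
    public char charAt(int index) {
        if (++consumed > budget) {
            throw new IllegalStateException("regex exhausted its character budget");
        }
        return wrapped.charAt(index);
    }

    @Override
    public int length() { return wrapped.length(); }

    @Override
    public CharSequence subSequence(int start, int end) {
        return wrapped.subSequence(start, end);
    }
}
```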
This change makes Location a final member of IRNode as opposed to possibly changing it. This
ensures that all IR nodes have a Location for error information upon creation that cannot be updated,
so each node can be tracked back to where it originally came from.
We only ever use this with `XContentParser`, so there is no need to make inlining
worse by forcing the lambda and hence a dynamic callsite here.
=> Extracted the exception-formatting code path, which is likely very cold,
to a separate method and removed the lambda usage in hot loops by simplifying
the signature here.
For runtime fields, we will want to do all search-time interaction with
a field definition via a MappedFieldType, rather than a FieldMapper, to
avoid interfering with the logic of document parsing. Currently, fetching
values for runtime scripts and for building top hits responses need to
call a method on FieldMapper. This commit moves this method to
MappedFieldType, incidentally simplifying the current call sites and freeing
us up to implement runtime fields as pure MappedFieldType objects.
* Add System Indices check to AutoCreateIndex
By default, Elasticsearch auto-creates indices when a document is
submitted to a non-existent index. There is a setting that allows users
to disable this behavior. However, this setting should not apply to
system indices, so that Elasticsearch modules and plugins are able to
use auto-create behavior whether or not it is exposed to users.
This commit constructs the AutoCreateIndex object with a reference to
the SystemIndices object so that we bypass the check for the user-facing
autocreate setting when it's a system index that is being autocreated.
We also modify the logic in TransportBulkAction to make sure that if a
system index is included in a bulk request, we don't skip the
autocreation step.
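A minimal sketch of the described check, with assumed names:
```
// Illustrative: system indices bypass the user-facing auto-create setting.
boolean shouldAutoCreate(String index, ClusterState state) {
    if (systemIndices.isSystemIndex(index)) {
        return true; // modules/plugins can always auto-create their system indices
    }
    // only non-system indices consult the action.auto_create_index setting
    return userFacingAutoCreate.shouldAutoCreate(index, state);
}
```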
Currently we read in 64KB blocks from the network. When TLS is not
enabled, these bytes are normally passed all the way to the application
layer (some exceptions: compression). For the HTTP layer this means that
these bytes can live throughout the entire lifecycle of an indexing
request.
The problem is that if the reads from the socket are small, this means
that 64KB buffers can be consumed by 1KB or smaller reads. If the socket
buffer or TCP buffer sizes are small, the leads to massive memory
waste. It has been identified as a major source of OOMs on coordinating
nodes as Elasticsearch easily exhausts the heap for these network bytes.
This commit resolves the problem by placing a handler after the TLS
handler to copy these bytes to a more appropriate buffer size as
necessary. This comes after TLS, because TLS is a framing layer which
often resolves this problem for us (the 64KB buffer will be decoded
into a more appropriate buffer size). However, this extra handler will
solve it for the non-TLS pipelines.
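A hedged Netty sketch of such a copying handler; the real handler is presumably more selective about when copying pays off:
```
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Illustrative: copy a potentially tiny read out of the large 64KB read
// buffer into a right-sized buffer so the big buffer can be released.
public class CopyBytesHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof ByteBuf) {
            ByteBuf buf = (ByteBuf) msg;
            try {
                // Allocate a buffer sized to the bytes actually read.
                ByteBuf copy = ctx.alloc().heapBuffer(buf.readableBytes());
                copy.writeBytes(buf);
                ctx.fireChannelRead(copy);
            } finally {
                buf.release();
            }
        } else {
            ctx.fireChannelRead(msg);
        }
    }
}
```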
This adds the network property from the MaxMind Geo ASN database.
This enables analysis of IP data based on the subnets that MaxMind have
previously identified for ASN networks.
closes #60942
Co-authored-by: Peter Ansell <p_ansell@yahoo.com>
Unmute DeleteByQueryConcurrentTests
testConcurrentDeleteByQueriesOnDifferentDocs test.
LUCENE-9449 introduced a bug in sorting on _doc,
which resulted in failure of this test. As the Lucene bug
has been fixed, this reenables the test.
Closes #62609
This converts RankFeatureFieldMapper, RankFeaturesFieldMapper,
SearchAsYouTypeFieldMapper and TokenCountFieldMapper to
parametrized forms. It also adds a TextParams utility class to core
containing functions that help declare text parameters - mainly shared
between SearchAsYouTypeFieldMapper and KeywordFieldMapper at
the moment, but it will come in handy when we convert TextFieldMapper
and friends.
Relates to #62988
Currently Netty will batch compress an entire HTTP response
regardless of its content size. It allocates a byte array at least of
the same size as the uncompressed content. This causes issues with our
attempts to remove humongous G1GC allocations. This commit resolves the
issue by splitting responses into 128KB chunks.
This has the side-effect of making large outbound HTTP responses that
are compressed be sent using chunked transfer-encoding.
Currently we duplicate our specialized cors logic in all transport
plugins. This is unnecessary as it could be implemented in a single
place. This commit moves the logic to server. Additionally it fixes a
bug where we are incorrectly closing http channels on early CORS
responses.
Introduce 64-bit unsigned long field type
This field type supports
- indexing of integer values from [0, 18446744073709551615]
- precise queries (term, range)
- precise sort and terms aggregations
- other aggregations are based on conversion of long values
to double and can be imprecise for large values (see the sketch below).
Backport for #60050
Closes #32434
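Since the JVM has no unsigned 64-bit primitive, values in this range are handled through `java.lang.Long`'s unsigned helpers; a small illustration (not the mapper's code):
```
import java.math.BigInteger;

public class UnsignedLongDemo {
    public static void main(String[] args) {
        // The full unsigned range reuses the signed bit pattern: max == -1L.
        long max = Long.parseUnsignedLong("18446744073709551615");
        System.out.println(Long.toUnsignedString(max)); // 18446744073709551615

        // Precise unsigned comparison, as term/range queries and sorting require:
        System.out.println(Long.compareUnsigned(max, 1L) > 0); // true

        // Conversion to double for other aggregations loses precision up here:
        double approx = new BigInteger(Long.toUnsignedString(max)).doubleValue();
        System.out.println(approx); // 1.8446744073709552E19
    }
}
```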
This commit adds a mechanism to MapperTestCase that allows implementing
test classes to check that their parameters can be updated, or throw conflict
errors as advertised. Child classes override the registerParameters method
and tell the passed-in UpdateChecker class about their parameters. Simple
conflicts can be checked, using the existing minimal mappings as a base to
compare against, or alternatively a particular initial mapping can be provided
to check edge cases (eg, norms can be updated from true to false, but not
vice versa). Updates are registered with a predicate that checks that the update
has in fact been applied to the resulting FieldMapper.
Fixes #61631
Most of our field types have the same implementation for their `existsQuery` method which relies on doc_values if present, otherwise it queries norms if available or uses a term query against the _field_names meta field. This standard implementation is repeated in many different mappers.
There are field types that only query doc_values, because they always have them, and field types that always query _field_names, because they never have norms nor doc_values. We could apply the same standard logic to all of these field types as `MappedFieldType` has the knowledge about what data structures are available.
This commit introduces a standard implementation that does the right thing depending on the data structure that is available. With that only field types that require a different behaviour need to override the existsQuery method.
At the same time, this no longer forces subclasses to override `existsQuery`, which could be forgotten when needed. To address this we introduced a new test method in `MapperTestCase` that verifies the `existsQuery` being generated and its consistency with the available data structures.
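A sketch of the standard implementation described above, using Lucene's exists queries; the accessor names are approximate:
```
// Illustrative: pick the cheapest available structure to answer "exists".
public Query existsQuery(QueryShardContext context) {
    if (hasDocValues()) {
        return new DocValuesFieldExistsQuery(name());
    } else if (hasNorms()) {
        return new NormsFieldExistsQuery(name());
    } else {
        // fall back to a term query against the _field_names meta field
        return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name()));
    }
}
```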
* Make foreach processor resistant to field modification (#62791)
This change provides a consistent view of the field that the foreach processor is iterating over, sketched below. That prevents it from going into an infinite loop and putting great pressure on the cluster.
Closes #62790
* fix compilation
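A minimal sketch of the consistent-view idea under assumed accessors (`_ingest._value` is the processor's per-item binding):
```
import java.util.ArrayList;
import java.util.List;

// Illustrative only: iterate over a snapshot of the field's current value so
// that an inner processor which modifies the field cannot cause an infinite loop.
void executeForEach(IngestDocument document, String field, Processor innerProcessor) throws Exception {
    List<?> snapshot = new ArrayList<>(document.getFieldValue(field, List.class));
    for (Object value : snapshot) {
        document.setFieldValue("_ingest._value", value);
        innerProcessor.execute(document);
    }
}
```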
This reworks the code around grok's built-in patterns to name things
more like the rest of the code. It's not a big deal, but I'm just more
used to having `public static final` constants in SHOUTING_SNAKE_CASE.
The dense vector field is not aggregatable although it produces fielddata through its BinaryDocValuesField. It should pass up hasDocValues set to true to its parent class in its constructor, and return isAggregatable false. Same for the sparse vector field (only in 7.x).
This may not have consequences today, but it will be important once we try to share the same exists query implementation throughout all of the mappers with #57607.
Currently we log the NettyAllocator description when the netty plugin is
created. Unfortunately, this hits certain static fields in Netty which
triggers the setting of the number of CPU processors. This conflicts
with our Elasticsearch behavior to override this based on a setting.
This commit resolves the issue by logging after the processors have been
set.
Currently the netty pool chunk size defaults to 16MB. The number does
not play well with the G1GC which causes this to consume entire regions.
Additionally, we normally allocate arrays of size 64KB or less. This
means that Elasticsearch could handle a smaller pool chunk size to play
nicer with the G1GC.
Backports #61590 to 7.x
So far we don't allow metadata fields in the document _source. However, in the case of the _doc_count field mapper (#58339) we want to be able to set this field in the document _source.
This PR adds a method to the metadata field parsers that exposes whether the field can be included in the document source or not.
This way each metadata field can configure whether it can be included in the document _source.
FetchSubPhase#getProcessor currently takes a SearchLookup parameter. This
however is only needed by a couple of subphases, and will almost certainly change in
future as we want to simplify how fetch phases retrieve values for individual hits.
To future-proof against further signature changes, this commit moves the SearchLookup
reference into FetchContext instead.
We currently pass a SearchContext around to share configuration among
FetchSubPhases. With the introduction of runtime fields, it would be useful
to start storing some state on this context to be shared between different
subphases (for example, stored fields or search lookups can be loaded lazily
but referred to by many different subphases). However, SearchContext is a
very large and unwieldy class, and adding more methods or state here feels
like a bridge too far.
This commit introduces a new FetchContext class that exposes only those
methods on SearchContext that are required for fetch phases. This reduces
the API surface area for fetch phases considerably, and should give us some
leeway to add further state.
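An illustrative shape for the narrowed context, delegating to the wrapped SearchContext; the selection of methods is an example, not the full surface:
```
// Illustrative: expose only what fetch phases need.
public class FetchContext {
    private final SearchContext searchContext;

    public FetchContext(SearchContext searchContext) {
        this.searchContext = searchContext;
    }

    public Query query() {
        return searchContext.query();
    }

    public boolean sourceRequested() {
        return searchContext.sourceRequested();
    }

    public StoredFieldsContext storedFieldsContext() {
        return searchContext.storedFieldsContext();
    }
}
```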
This new snapshot contains the following JIRAs that we're interested in:
- [LUCENE-9525](https://issues.apache.org/jira/browse/LUCENE-9525)
Better handling of small documents. This should improve retrieval times
when documents are less than ~1kB.
- [LUCENE-9510](https://issues.apache.org/jira/browse/LUCENE-9510)
Faster flushes when index sorting is enabled by not compressing the
temporary files that store stored fields and term vectors.
This backport incorporates all the changes to improve compiler extensibility. The reason for this
backport is the changes are now required to support runtime fields.
This implements the `fields` API in `_search` for runtime fields using
doc values. Most of that implementation is stolen from the
`docvalue_fields` fetch sub-phase, just moved into the same API that the
`fields` API uses. At this point the `docvalue_fields` fetch phase looks
like a special case of the `fields` API.
While I was at it I moved the "which doc values sub-implementation
should I use for fetching?" question from a bunch of `instanceof`s to a
method on `LeafFieldData` so we can be much more flexible with what is
returned and we're not forced to extend certain classes just to make the
fetch phase happy.
Relates to #59332
FastVectorHighlighter uses the top-level reader to rewrite queries against, which
it gets via an IndexSearcher field on HitContext. However, we can already access
this top-level reader via HitContext's existing LeafReaderContext field.
This commit removes the unnecessary field and constructor parameter, and
changes the implementation of topLevelReader to go via ReaderUtils and
the leaf reader context.
This PR adds support for the 'fields' option in the following places:
* Anytime `inner_hits` is used, for both fetching nested/ child docs and field collapsing
* The `top_hits` aggregation
Addresses #61949.
Today some uncaught shard failures such as RejectedExecutionException skip the release of shard context
and let subsequent scroll requests access the same shard context again. Depending on how the other shards advanced,
this behavior can lead to missing data since scrolls always move forward.
In order to avoid hidden data loss, this commit ensures that we always release the context of shard search scroll requests whenever a failure
occurs locally. The shard search context will no longer exist in subsequent scroll requests which will lead to consistent shard failures
in the responses.
This change also modifies the retry tests of the reindex feature. Reindex retries a scroll search request that contains a shard failure and
moves on whenever the failure disappears. That is not compatible with how scrolls work and can lead to missing data as explained above.
That means that reindex will now report scroll failures when search rejections happen during the operation instead of skipping documents
silently.
Finally this change removes an old TODO that was fulfilled with #61062.
This commit introduces a new API that manages point-in-times in x-pack
basic. Elasticsearch pit (point in time) is a lightweight view into the
state of the data as it existed when initiated. A search request by
default executes against the most recent point in time. In some cases,
it is preferred to perform multiple search requests using the same point
in time. For example, if refreshes happen between search_after requests,
then the results of those requests might not be consistent as changes
happening between searches are only visible to the more recent point in
time.
A point in time must be opened before being used in search requests. The
`keep_alive` parameter tells Elasticsearch how long it should keep a
point in time around.
```
POST /my_index/_pit?keep_alive=1m
```
The response from the above request includes an `id`, which should be
passed to the `id` of the `pit` parameter of search requests.
```
POST /_search
{
  "query": {
    "match": {
      "title": "elasticsearch"
    }
  },
  "pit": {
    "id": "46ToAwMDaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQNpZHkFdXVpZDIrBm5vZGVfMwAAAAAAAAAAKgFjA2lkeQV1dWlkMioGbm9kZV8yAAAAAAAAAAAMAWICBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
    "keep_alive": "1m"
  }
}
```
Point-in-times are automatically closed when the `keep_alive` has
elapsed. However, keeping point-in-times has a cost; hence,
point-in-times should be closed as soon as they are no longer used in
search requests.
```
DELETE /_pit
{
  "id" : "46ToAwMDaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQNpZHkFdXVpZDIrBm5vZGVfMwAAAAAAAAAAKgFjA2lkeQV1dWlkMioGbm9kZV8yAAAAAAAAAAAMAWIBBXV1aWQyAAA="
}
```
#### Notable works in this change:
- Move the search state to the coordinating node: #52741
- Allow searches with a specific reader context: #53989
- Add the ability to acquire readers in IndexShard: #54966
Relates #46523
Relates #26472
Co-authored-by: Jim Ferenczi <jimczi@apache.org>
This commit removes `integTest` task from all es-plugins.
Most relevant projects have been converted to use yamlRestTest, javaRestTest,
or internalClusterTest in prior PRs.
A few projects needed to be adjusted to allow complete removal of this task
* x-pack/plugin - converted to use yamlRestTest and javaRestTest
* plugins/repository-hdfs - kept the integTest task, but use `rest-test` plugin to define the task
* qa/die-with-dignity - convert to javaRestTest
* x-pack/qa/security-example-spi-extension - convert to javaRestTest
* multiple projects - remove the integTest.enabled = false (yay!)
related: #61802
related: #60630
related: #59444
related: #59089
related: #56841
related: #59939
related: #55896
This also adds the ability to define a serialization check on Parameters, used
in this case to only serialize format and locale parameters if the mapper is a
date range.
There are currently half a dozen ways to add plugins and modules for
test clusters to use. All of them require the calling project to peek
into the plugin or module they want to use to grab its bundlePlugin
task, and then both depend on that task, as well as extract the archive
path the task will produce. This creates cross project dependencies that
are difficult to detect, and if the dependent plugin/module has not yet
been configured, the build will fail because the task does not yet
exist.
This commit makes the plugin and module methods for testclusters
symmetric, allowing simply adding a file provider directly, or a project
path that will produce the plugin/module zip.
variant uses normal configuration/dependencies across projects to get
the zip artifact. It also has the added benefit of no longer needing the
caller to add to the test task a dependsOn for bundlePlugin task.
FetchSubPhase has two 'execute' methods, one which takes all hits to be examined,
and one which takes a single HitContext. It's not obvious which one should be implemented
by a given sub-phase, or if implementing both is a possibility; nor is it obvious that we first
run the hitExecute methods of all subphases, and then subsequently call all the
hitsExecute methods.
This commit reworks FetchSubPhase to replace these two variants with a processor class,
`FetchSubPhaseProcessor`, that is returned from a single `getProcessor` method. This
processor class has two methods, `setNextReader()` and `process`. FetchPhase collects
processors from all its subphases (if a subphase does not need to execute on the current
search context, it can return `null` from `getProcessor`). It then sorts its hits by docid, and
groups them by lucene leaf reader. For each reader group, it calls `setNextReader()` on
all non-null processors, and then passes each doc id to `process()`.
Implementations of fetch sub phases can divide their concerns into per-request, per-reader
and per-document sections, and no longer need to worry about sorting docs or dealing with
reader slices.
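A sketch of that contract and the driving loop; signatures and the grouping helper are approximate:
```
// The per-reader/per-document contract (approximate signatures):
interface FetchSubPhaseProcessor {
    void setNextReader(LeafReaderContext readerContext) throws IOException;
    void process(FetchSubPhase.HitContext hitContext) throws IOException;
}

// FetchPhase then drives all non-null processors, with hits sorted by docid
// and grouped by leaf reader ('LeafGroup' is a hypothetical helper):
for (LeafGroup group : hitsSortedByDocIdAndGroupedByLeaf) {
    for (FetchSubPhaseProcessor processor : processors) {
        processor.setNextReader(group.leafReaderContext());
    }
    for (FetchSubPhase.HitContext hit : group.hits()) {
        for (FetchSubPhaseProcessor processor : processors) {
            processor.process(hit);
        }
    }
}
```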
FetchSubPhase now provides a FetchSubPhaseExecutor that exposes two methods,
setNextReader(LeafReaderContext) and execute(HitContext). The parent FetchPhase collects all
these executors together (if a phase should not be executed, then it returns null here); then
it sorts hits, and groups them by reader; for each reader it calls setNextReader, and then
execute for each hit in turn. Individual sub phases no longer need to concern themselves with
sorting docs or keeping track of readers; global structures can be built in
getExecutor(SearchContext), per-reader structures in setNextReader and per-doc in execute.
This commit adds a test to MapperTestCase that explicitly checks that a mapper can
serialize all its default values, and that this serialization can then be re-parsed. Note that
the test is disabled for non-parametrized mappers as their serialization may in some cases
output parameters that are not accepted. Gradually moving all mappers to parametrized
form will address this.
The commit also contains a fix to keyword mappers, which were not correctly serializing
the similarity parameter; this partially addresses #61563. It also enables `null` as a
value for `null_value` on `scaled_float`, as a follow-up to #61798
We frequently use `long`s with `BitArray` in aggs and right now we have
to assert that the `long` fits in an `int`. This adds support for `long`
to `BitArray` so we don't need those assertions.
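A minimal sketch of a bit array addressable by `long` (illustrative; word addressing is kept as `int` here for brevity, and the real class presumably uses the usual big-arrays machinery):
```
import java.util.Arrays;

// Illustrative only: grow-on-write bit set indexed by long.
final class LongBitArray {
    private long[] words = new long[1];

    void set(long index) {
        int word = (int) (index >>> 6);
        if (word >= words.length) {
            words = Arrays.copyOf(words, Math.max(word + 1, words.length * 2));
        }
        words[word] |= 1L << index; // shift count uses the low 6 bits of index
    }

    boolean get(long index) {
        int word = (int) (index >>> 6);
        return word < words.length && (words[word] & (1L << index)) != 0;
    }
}
```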
This commit enhances the verbose output for the
`_ingest/pipeline/_simulate?verbose` api. Specifically
this adds the following:
* the pipeline processor is now included in the output
* the conditional (if) and result is now included in the output iff it was defined
* a status field is always displayed. the possible values of status are
* `success` - if the processor ran without errors
* `error` - if the processor ran but threw an error that was not ignored
* `error_ignored` - if the processor ran but threw an error that was ignored
* `skipped` - if the processor did not run (currently only possible if the if condition evaluates to false)
* `dropped` - if the `drop` processor ran and dropped the document
* a `processor_type` field for the type of processor (e.g. set, rename, etc.)
* throw a better error if trying to simulate with a pipeline that does not exist
closes #56004
Runtime fields need to have a SearchLookup available, when building their fielddata implementations, so that they can look up other fields, runtime or not.
To achieve that, we add a Supplier<SearchLookup> argument to the existing MappedFieldType#fielddataBuilder method.
As we introduce the ability to look up other fields while building fielddata for mapped fields, we implicitly add the ability for a field to require other fields. This requires some protection mechanism that detects dependency cycles to prevent stack overflow errors.
With this commit we also introduce detection for cycles, as well as a limit on the depth of the references for a runtime field. Note that we also plan on introducing cycles detection at compile time, so the runtime cycles detection is a last resort to prevent stack overflow errors but we hope that we can reject runtime fields from being registered in the mappings when they create a cycle in their definition.
Note that this commit does not introduce any production implementation of runtime fields, but is rather a pre-requisite to merge the runtime fields feature branch.
This is a breaking change for MapperPlugins that plug in a mapper, as the signature of MappedFieldType#fielddataBuilder changes from taking a single argument (the index name), to also accept a Supplier<SearchLookup>.
Relates to #59332
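The shape of the breaking change, as described above (parameter names illustrative):
```
// Before: a single argument, the fully qualified index name.
public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName);

// After: implementations also receive lazy access to other fields.
public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName,
                                               Supplier<SearchLookup> searchLookup);
```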
Co-authored-by: Nik Everett <nik9000@gmail.com>
This commit removes the tasks module that only existed to define the
tasks result index, `.tasks`, as a system index. The definition for
the tasks results system index descriptor is moved to the
`SystemIndices` class with a check that no other plugin or module
attempts to define an entry with the same source.
Additionally, this change also makes the pattern for the tasks result
index a wildcard pattern since we will need this when the index is
upgraded (reindex to new name and then alias that to .tasks).
Backport of #61540
DeprecationLogger's constructor should not create two loggers. It was
taking a parent logger instance, changing its name with a .deprecation
prefix and creating a new logger.
Most of the time the parent logger was not needed. It was causing Log4j to
unnecessarily cache the unused parent logger instance.
depends on #61515
backports #58435
Splitting DeprecationLogger into two: HeaderWarningLogger - responsible for adding response warning headers - and ThrottlingLogger - responsible for limiting duplicated log entries for the same key (previously deprecateAndMaybeLog).
Introducing a ThrottlingAndHeaderWarningLogger which is a base for other common logging usages where both a response warning header and log throttling are needed.
relates #55699
relates #52369
backports #55941
Before when a value was copied to a field through a parent field or `copy_to`,
we parsed it using the `FieldMapper` from the source field. Instead we should
parse it using the target `FieldMapper`. This ensures that we apply the
appropriate mapping type and options to the copied value.
To implement the fix cleanly, this PR refactors the value parsing strategy. Now
instead of looking up values directly, field mappers produce a helper object
`ValueFetcher`. The value fetchers are responsible for almost all aspects of
fetching, including looking up the right paths in the _source.
The PR is fairly big but each commit can be reviewed individually.
Fixes #61033.
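An illustrative shape for the helper object (the actual interface may differ):
```
// Illustrative: each FieldMapper produces a fetcher that finds and parses its
// values from the _source using the *target* field's mapping type and options.
public interface ValueFetcher {
    List<Object> fetchValues(SourceLookup lookup) throws IOException;
}
```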
In addition, this commit converts ScaledFloatFieldMapper as it was relying
on a number of static values taken from NumberFieldMapper that had changed
or been removed.
This switches a few tests for field mappers from `ESSingleNodeTestCase`
to `ESTestCase` because, in general, we prefer to avoid
`ESSingleNodeTestCase` when we can because it is slow and "big". "Big"
here means that it pulls in an entire node, making it difficult to
reason about what you are testing.
This makes KeywordFieldMapper extend ParametrizedFieldMapper, with explicitly
defined parameters.
In addition, we add a new option to Parameter, restrictedStringParam, which
accepts a restricted set of string options.