We've had some discussions around the user experience when using runtime fields. Although we do plan on having multiple runtime field implementations (e.g. grok, lookup etc.) which could be exposed as different field types, we decided to expose all runtime fields under the same `runtime` type. At the moment, the only implementation will be through scripts, hence a `script` must be specified. In the future, there will be other ways to generate values for runtime fields besides scripts.
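As a rough illustration only, a script-backed runtime field could be declared along these lines; the field names, the property names (`runtime_type`, `script`) and the script body are assumptions made for this example, not the final syntax:

```java
import java.io.IOException;

import org.elasticsearch.common.Strings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

public class RuntimeFieldMappingSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical mapping: a "day_of_week" runtime field whose values come from
        // a script rather than from indexed data. All names below are illustrative.
        XContentBuilder mapping = XContentFactory.jsonBuilder()
            .startObject()
                .startObject("properties")
                    .startObject("day_of_week")
                        .field("type", "runtime")          // the single exposed type for all runtime fields
                        .field("runtime_type", "keyword")  // the type the scripted values are exposed as
                        .startObject("script")
                            .field("source", "emit(doc['some_keyword_field'].value)") // illustrative script body
                        .endObject()
                    .endObject()
                .endObject()
            .endObject();
        System.out.println(Strings.toString(mapping));
    }
}
```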
This also translates into renaming the `RuntimeScriptFieldMapper` class to `RuntimeFieldMapper`.
Relates to #59332
The interface is never used as an abstraction - implementations are called directly,
and most of them don't need to implement the preProcess method.
An important goal of the disk threshold decider is to ensure that nodes
use less disk space than the high watermark, and to take action if a
node ever exceeds this watermark. Today we do not have any
integration-style tests of this high-level behaviour. This commit
introduces a small test harness that can adjust the apparent size of the
disk and verify that the disk threshold decider moves shards around in
response.
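For reference, the watermarks the decider enforces are ordinary cluster settings; a minimal sketch with their default values (the test harness itself is not shown here):

```java
import org.elasticsearch.common.settings.Settings;

public class DiskWatermarkSettingsSketch {
    public static void main(String[] args) {
        // The disk threshold decider acts on these cluster settings: once a node's
        // disk usage crosses the high watermark, shards are moved off that node.
        Settings watermarks = Settings.builder()
            .put("cluster.routing.allocation.disk.watermark.low", "85%")
            .put("cluster.routing.allocation.disk.watermark.high", "90%")
            .put("cluster.routing.allocation.disk.watermark.flood_stage", "95%")
            .build();
        System.out.println(watermarks);
    }
}
```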
Co-authored-by: Yannick Welsch <yannick@welsch.lu>
A few small optimizations, motivated by the somewhat sub-standard performance observed in production for large snapshot cluster states with many concurrent snapshots.
Today, the terms aggregation reduces multiple aggregations at once using a map
to group the same buckets together. This operation can be costly since it requires
looking up every bucket in a global map in no particular order.
This commit changes how term buckets are sorted by shards and partial reduces in
order to be able to reduce results using a merge-sort strategy.
For bwc, results are merged with the legacy code if any of the aggregations use
a different sort (i.e. if they were returned by a node on a prior version).
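A simplified sketch of the merge-sort idea, not the actual `InternalTerms` reduce code: because each shard result is already sorted by key, a k-way merge can fold equal keys together without hashing every bucket into a global map.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class MergeSortReduceSketch {

    static final class Bucket {
        final String key;
        long docCount;
        Bucket(String key, long docCount) { this.key = key; this.docCount = docCount; }
    }

    // Each shard's cursor walks its own pre-sorted bucket list.
    static final class Cursor {
        final List<Bucket> buckets;
        int pos;
        Cursor(List<Bucket> buckets) { this.buckets = buckets; }
        Bucket current() { return buckets.get(pos); }
    }

    static List<Bucket> reduce(List<List<Bucket>> sortedShardResults) {
        PriorityQueue<Cursor> queue = new PriorityQueue<>(Comparator.comparing((Cursor c) -> c.current().key));
        for (List<Bucket> shard : sortedShardResults) {
            if (shard.isEmpty() == false) {
                queue.add(new Cursor(shard));
            }
        }
        List<Bucket> reduced = new ArrayList<>();
        while (queue.isEmpty() == false) {
            Cursor top = queue.poll();
            Bucket next = top.current();
            if (reduced.isEmpty() == false && reduced.get(reduced.size() - 1).key.equals(next.key)) {
                reduced.get(reduced.size() - 1).docCount += next.docCount;  // same key: combine counts
            } else {
                reduced.add(new Bucket(next.key, next.docCount));
            }
            if (++top.pos < top.buckets.size()) {
                queue.add(top);  // advance this shard's cursor and re-enter the merge
            }
        }
        return reduced;
    }

    public static void main(String[] args) {
        List<Bucket> shard1 = List.of(new Bucket("a", 2), new Bucket("b", 1));
        List<Bucket> shard2 = List.of(new Bucket("a", 3), new Bucket("c", 5));
        for (Bucket b : reduce(List.of(shard1, shard2))) {
            System.out.println(b.key + " -> " + b.docCount);  // a -> 5, b -> 1, c -> 5
        }
    }
}
```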
Relates #51857
The null_value parameter for date fields is always parsed using DateFormatter.parseMillis,
which is incorrect for nanosecond resolution fields. This commit changes the parsing logic
to always use DateFieldType.parse() to parse the null value.
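A quick `java.time` illustration of why millisecond-based parsing is lossy for nanosecond-resolution values; this shows the general precision issue rather than the mapper code itself:

```java
import java.time.Instant;

public class NanosPrecisionSketch {
    public static void main(String[] args) {
        Instant value = Instant.parse("2020-09-08T10:15:30.123456789Z");
        long millis = value.toEpochMilli();                  // sub-millisecond precision is dropped here
        Instant roundTripped = Instant.ofEpochMilli(millis); // ...123Z, the nanoseconds are gone
        System.out.println(value + " != " + roundTripped);
    }
}
```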
Backport of #61998 to 7.x branch.
Moving the data stream yaml tests to the xpack plugin module has the following benefits:
* The tests are run both with security enabled (as part of xpack/plugin integTest)
and disabled (as part of xpack/plugin/data-stream/qa/rest integTest).
* The tests are also run in the mixed cluster qa environment.
This commit includes the work that has been done on the runtime fields feature branch so far. The high level tasks are listed in #59332. The tasks that have not yet been completed can be worked on after merging the feature branch.
We are adding a new x-pack plugin called runtime-fields that plugs in a custom mapper which allows defining runtime fields based on a script.
The changes included in this commit that were made outside of the x-pack/plugin/runtime-fields directory are minimal and revolve around 1) making the ScriptService available while parsing index mappings so that the scripts associated with runtime fields can be compiled, and 2) sharing code to manipulate ranges etc. as it can be reused in runtime fields.
Co-authored-by: Nik Everett <nik9000@gmail.com>
Flattening both streams into a single stream here saves a few objects and some indirection.
Also, removed the redundant `offset` field, which added nothing but complexity by
forcing two counters to be incremented on every read.
We have a couple of yaml tests that index documents under a 'test' type, while they could omit it. We do still want to test that specifying the type is allowed in 7.x, but we already have specific tests for that, and the other tests should use the endpoints that don't require specifying a type.
This commit adds external test modules. These are modules meant for
external systems to test edge cases in elasticsearch, but only within
snapshot builds. They are not meant to be used in production, so protections
are also added against their accidental inclusion in release builds.
Note that this commit does not actually add any new modules, it only
adds the infrastructure for the new modules, under
`test/external-modules`.
Simplifies allocation for snapshot-backed shards by always making the recovery source "from snapshot" for those
snapshot-backed shards (instead of "recover from local or from empty store"). Also lets the balancer pick the node
to allocate the snapshot-backed shard to (taking the number of shards on each node into account, unlike the current
implementation, which just picks whatever node we are allowed to allocate to, with no notion of "balancing" at all).
Currently, if an incorrectly formatted date is passed as a null_value for a date field mapper
configuration, you get a vague error:
Failed to parse mapping [_doc]: cannot parse empty date
Similarly, if you pass an incorrect format, you get the error:
Failed to parse mapping [_doc]: Invalid format [...]
This commit improves both these errors by including the mapper name and parameter that
are misconfigured.
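For illustration, this is the kind of misconfigured mapping that triggers such an error (`start_date` is a made-up field name); the improved message now identifies the field and the offending parameter instead of only reporting a generic parse failure:

```java
import java.io.IOException;

import org.elasticsearch.common.Strings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

public class BadNullValueMappingSketch {
    public static void main(String[] args) throws IOException {
        // The null_value below does not match the declared format, which is the
        // kind of misconfiguration the improved error message now attributes to
        // the field and parameter involved.
        XContentBuilder mapping = XContentFactory.jsonBuilder()
            .startObject()
                .startObject("properties")
                    .startObject("start_date")
                        .field("type", "date")
                        .field("format", "strict_date_optional_time")
                        .field("null_value", "not-a-date")
                    .endObject()
                .endObject()
            .endObject();
        System.out.println(Strings.toString(mapping));
    }
}
```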
Fixes #61712
During prewarming of a Lucene file a CacheFile is acquired and
then locked for the duration of the prewarming, i.e. locked until all
the parts of the file have been downloaded and written to cache on
disk. The locking (executed with CacheFile#fileLock()) is there to
prevent the cache file from being evicted while it is prewarming.
But holding the lock may take a while for large files, especially since
restoring snapshot files now respects the
indices.recovery.max_bytes_per_sec setting of 40mb (#58658),
and this can have bad consequences like preventing the CacheFile
from being evicted, opened or closed. In manual tests this bug slows
down various requests like mounting a new searchable snapshot
index or deleting an existing one that is still prewarming.
This commit reduces the time the lock is held during prewarming so
that the read lock is only required when actively writing to the CacheFile.
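A simplified sketch of the narrower lock scope, using a plain `ReentrantReadWriteLock` rather than the actual CacheFile API: each chunk is downloaded outside the lock and the lock is only taken around the write itself.

```java
import java.nio.ByteBuffer;
import java.util.Iterator;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class PrewarmLockScopeSketch {

    private final ReentrantReadWriteLock fileLock = new ReentrantReadWriteLock();

    void prewarmPart(Iterator<ByteBuffer> downloadedChunks) {
        while (downloadedChunks.hasNext()) {
            ByteBuffer chunk = downloadedChunks.next(); // downloading happens outside the lock
            fileLock.readLock().lock();                 // lock only while actively writing to the cache file
            try {
                writeToCache(chunk);
            } finally {
                fileLock.readLock().unlock();           // eviction/open/close can proceed between writes
            }
        }
    }

    private void writeToCache(ByteBuffer chunk) {
        // placeholder for the actual write of the chunk to the cache file on disk
    }
}
```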
* [ML] adds new n_gram_encoding custom processor (#61578)
This adds a new `n_gram_encoding` feature processor for analytics and inference.
The focus of this processor is simple ngram encodings that allow (see the sketch below):
- multiple ngrams [1..5]
- prefix, infix, suffix
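A rough sketch of what simple character n-gram encoding of a value produces; the real processor's configuration (window position, feature naming, etc.) is not reproduced here:

```java
import java.util.ArrayList;
import java.util.List;

public class NGramSketch {

    // Generate all character n-grams of the value for lengths minGram..maxGram.
    static List<String> nGrams(String value, int minGram, int maxGram) {
        List<String> grams = new ArrayList<>();
        for (int n = minGram; n <= maxGram; n++) {
            for (int i = 0; i + n <= value.length(); i++) {
                grams.add(value.substring(i, i + n));
            }
        }
        return grams;
    }

    public static void main(String[] args) {
        // 1- and 2-grams of "host1": [h, o, s, t, 1, ho, os, st, t1]
        System.out.println(nGrams("host1", 1, 2));
    }
}
```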
As we figured out in
https://github.com/elastic/elasticsearch/issues/61316#issuecomment-685482708
Azul backports a lot of changes from JDK 11 to their Zulu8 build,
which means that we can't run this with SunJSSE in FIPS 140 mode.
This change ensures that we configure Zulu8 JDK JVMs in FIPS 140
mode using the bouncy castle JSSE FIPS provider, instead of the
SunJSSE one (as we do for the rest of the Java 8 JVMs).
Resolves: #61316
Previously, we added a copy of the `_id` during reindexing and sorted
the destination index on that. This allowed us to traverse the docs in the
destination index in a stable order multiple times and efficiently.
However, the destination index being sorted means we cannot have `nested`
typed fields. This is a problem as it does not allow us to provide
a good experience with our evaluate API when it comes to computing
metrics for specific classes, features, etc.
This commit changes the approach in order to arrive at a destination
index that allows nested fields.
Instead of adding a copy of the `_id` field, we now add an incremental
id that we can use to traverse the docs in a stable order. We also
ensure we always assign the same incremental id to the same doc from
the source indices by sorting on `_seq_no` during reindexing. That,
in combination with the reindexing API using scroll, gives us a stable
order, as scroll uses the (`_index`, `_doc`, shard_id) tuple to resolve ties.
The extractor no longer needs to scroll. Instead we sort on the incremental
id and do ranged searches to avoid the sort-all-docs overhead.
Finally, the `TestDocsIterator` is simply changed to use search_after on the incremental id.
With these changes data frame analytics jobs no longer use scroll at any point.
With all of this in place, the commit adds the `nested` types to the necessary
fields of `classification` and `regression` analyses results.
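A rough sketch of the ranged-search idea; `incremental_id` is a placeholder name for the internal field, not necessarily the name used in the destination index:

```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.sort.SortOrder;

public class RangedExtractionSketch {

    // Build the next batch request: resume after the last incremental id we processed,
    // sorted ascending so the order is stable without scroll.
    static SearchSourceBuilder nextBatch(long lastSeenId, int batchSize) {
        return new SearchSourceBuilder()
            .query(QueryBuilders.rangeQuery("incremental_id").gt(lastSeenId))
            .sort("incremental_id", SortOrder.ASC)
            .size(batchSize);
    }

    public static void main(String[] args) {
        System.out.println(nextBatch(41L, 500));
    }
}
```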
Backport of #61943
When a user authenticates via OpenID Connect we copy information from
the OIDC claims into the user's metadata in a particular format.
This commit adds a test that metadata in that format can be used in a
mustache template for Document Level Security.
Backport of: #60030
A role mapping with the following content:
"rules": { "field": { "userid" : "admin" } }
will never match because `userid` is not a valid field. The correct
field is `username`.
This change adds DEBUG logging when an undefined field is referenced.
The reason for using DEBUG rather than INFO/WARN is that the set of
fields is partially dynamic (e.g. the `metadata.*` fields), so
it may be perfectly reasonable to check a field that is not defined
for that user. For example this rule:
"rules": { "field": { "metadata.ranking" : "A" } }
would generate a log message for an unranked user, which would
erroneously suggest that such a rule is an error.
This DEBUG logging will assist in diagnosing problems, without
introducing that confusion.
Backport of: #61246
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
There are currently half a dozen ways to add plugins and modules for
test clusters to use. All of them require the calling project to peek
into the plugin or module they want to use to grab its bundlePlugin
task, and then both depend on that task and extract the archive
path it will produce. This creates cross project dependencies that
are difficult to detect, and if the dependent plugin/module has not yet
been configured, the build will fail because the task does not yet
exist.
This commit makes the plugin and module methods for testclusters
symmetric, allowing a file provider to be passed directly, or a project
path that will produce the plugin/module zip. Internally this new
variant uses normal configuration/dependencies across projects to get
the zip artifact. It also has the added benefit of no longer requiring the
caller to add a dependsOn for the bundlePlugin task to the test task.
The main changes are:
* Fix custom params being missing when using a template or script in watcher's
logging action or jira action.
* Add yaml tests to verify that params are passed to the template or script successfully.
Relates to #57625
Co-authored-by: bellengao <gbl_long@163.com>
The current implementation of the filter pipe is incomplete, which is why
it got reverted. Note this is not a complete revert, as some of the
improvements of said commit (such as the PostAnalyzer) are useful in
general.
Relates #61805
(cherry picked from commit 7a7eb66f7d39586c3a3bc00dce49e6c47a23b46a)
This replaces a specialized bit set implementation used in cardinality
with our standard `BitArray`, which works exactly the same way. It's also
tracked by `BigArrays`, which is great!
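A minimal usage sketch of `BitArray`; the size and indices are arbitrary, and circuit-breaker accounting only applies when a real `BigArrays` instance is used:

```java
import org.elasticsearch.common.util.BigArrays;
import org.elasticsearch.common.util.BitArray;

public class BitArraySketch {
    public static void main(String[] args) {
        // Backing storage is allocated (and grown) through BigArrays.
        try (BitArray bits = new BitArray(1024, BigArrays.NON_RECYCLING_INSTANCE)) {
            bits.set(42);
            System.out.println(bits.get(42)); // true
            System.out.println(bits.get(43)); // false
        }
    }
}
```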