Currently, if an incorrectly formatted date is passed as the `null_value` in a `date` field mapper
configuration, you get a vague error:
Failed to parse mapping [_doc]: cannot parse empty date
Similarly, if you pass an incorrect format, you get the error:
Failed to parse mapping [_doc]: Invalid format [...]
This commit improves both these errors by including the mapper name and parameter that
are misconfigured.
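For example, a date mapping along these lines (field name illustrative):
"mappings": { "properties": { "release_date": { "type": "date", "null_value": "" } } }
previously failed with the vague message above, while the improved error names the `release_date` mapper and its `null_value` parameter.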
Fixes #61712
During prewarming of a Lucene file, a CacheFile is acquired and
then locked for the duration of the prewarming, i.e. locked until all
the parts of the file have been downloaded and written to the cache on
disk. The locking (done with `CacheFile#fileLock()`) is there to
prevent the cache file from being evicted while it is prewarming.
But holding the lock may take a while for large files, especially since
restoring snapshot files now respects the
`indices.recovery.max_bytes_per_sec` setting of 40mb (#58658),
and this can have bad consequences, like preventing the CacheFile
from being evicted, opened or closed. In manual tests this bug slows
down various requests, such as mounting a new searchable snapshot
index or deleting an existing one that is still prewarming.
This commit reduces the time the lock is held during prewarming so
that the read lock is only required when actively writing to the CacheFile.
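A minimal sketch of the narrowed locking pattern, using a plain `ReentrantReadWriteLock` in place of the real `CacheFile` internals:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class PrewarmingSketch {
    private final ReentrantReadWriteLock cacheLock = new ReentrantReadWriteLock();

    // Before: the file lock was held across the download of all parts of the file.
    // After: each part is downloaded without holding the lock, which is acquired
    // only for the short window in which the bytes are written to the cache file,
    // so eviction/open/close are no longer blocked for the whole prewarming.
    void writePart(FileChannel cacheChannel, long offset, ByteBuffer partBytes) throws IOException {
        cacheLock.readLock().lock();
        try {
            cacheChannel.write(partBytes, offset);
        } finally {
            cacheLock.readLock().unlock(); // released between parts
        }
    }
}
```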
* [ML] adds new n_gram_encoding custom processor (#61578)
This adds a new `n_gram_encoding` feature processor for analytics and inference.
The focus of this processor is simple n-gram encodings that allow:
- multiple n-gram lengths [1..5]
- prefix, infix, and suffix n-grams
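For intuition, the underlying character n-gram extraction amounts to something like the following sketch (the processor's actual configuration and output encoding live in the ML code):

```java
import java.util.ArrayList;
import java.util.List;

class NGramSketch {
    // Collects all character n-grams of lengths minGram..maxGram (e.g. 1..5),
    // naturally covering prefix, infix and suffix positions of the input.
    static List<String> nGrams(String input, int minGram, int maxGram) {
        List<String> grams = new ArrayList<>();
        for (int n = minGram; n <= maxGram; n++) {
            for (int i = 0; i + n <= input.length(); i++) {
                grams.add(input.substring(i, i + n));
            }
        }
        return grams; // e.g. nGrams("abc", 1, 2) -> [a, b, c, ab, bc]
    }
}
```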
As we figured out in
https://github.com/elastic/elasticsearch/issues/61316#issuecomment-685482708
Azul brings back a lot of changes from JDK 11 to their Zulu8 build,
which means that we can't run this with SunJSSE in FIPS 140 mode.
This change ensures that we configure Zulu8 JDK JVMs in FIPS 140
mode using the Bouncy Castle JSSE FIPS provider instead of the
SunJSSE one (as we do for the rest of the Java 8 JVMs).
Resolves: #61316
Previously, we added a copy of the `_id` during reindexing and sorted
the destination index on that. This allowed us to traverse the docs in the
destination index in a stable order multiple times and efficiently.
However, the destination index being sorted means we cannot have `nested`
typed fields. This is a problem as it does not allow us to provide
a good experience with our evaluate API when it comes to computing
metrics for specific classes, features, etc.
This commit changes the approach so that the resulting destination
index allows nested fields.
Instead of adding a copy of the `_id` field, we now add an incremental
id that we can use to traverse the docs in a stable order. We also
ensure we always assign the same incremental id to the same doc from
the source indices by sorting on `_seq_no` during reindexing. That
in combination with the reindexing API using scroll gives us a stable
order as scroll uses the (`_index`, `_doc`, shard_id) tuple to resolve ties.
The extractor no longer needs to scroll. Instead, we sort on the incremental
id and perform ranged searches to avoid the overhead of sorting all docs.
Finally, the `TestDocsIterator` is simply changed to use `search_after` on the incremental id.
With these changes, data frame analytics jobs do not use scroll at any point.
Having all these in place, the commit adds the `nested` types to the necessary
fields of `classification` and `regression` analyses results.
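A rough sketch of the `search_after` traversal over the incremental id, using the high-level REST client (the `incremental_id` field name here is illustrative):

```java
import java.io.IOException;

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.sort.SortBuilders;

class StableOrderTraversal {
    // Walks all docs of the destination index in stable incremental-id order,
    // batch by batch, without scroll.
    static void traverse(RestHighLevelClient client, String destIndex) throws IOException {
        Object[] searchAfter = null;
        while (true) {
            SearchSourceBuilder source = new SearchSourceBuilder()
                .size(1000)
                .sort(SortBuilders.fieldSort("incremental_id")); // illustrative field name
            if (searchAfter != null) {
                source.searchAfter(searchAfter);
            }
            SearchResponse response =
                client.search(new SearchRequest(destIndex).source(source), RequestOptions.DEFAULT);
            SearchHit[] hits = response.getHits().getHits();
            if (hits.length == 0) {
                break;
            }
            // ... process the batch of hits here ...
            searchAfter = hits[hits.length - 1].getSortValues();
        }
    }
}
```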
Backport of #61943
When a user authenticates via OpenID Connect we copy information from
the OIDC claims into the user's metadata in a particular format.
This commit adds a test that metadata in that format can be used in a
mustache template for Document Level Security.
Backport of: #60030
A role mapping with the following content:
"rules": { "field": { "userid" : "admin" } }
will never match because `userid` is not a valid field. The correct
field is `username`.
This change adds DEBUG logging when an undefined field is referenced.
The choice of DEBUG rather than INFO/WARN is that the set of
fields is partially dynamic (e.g. the `metadata.*` fields), so
it may be perfectly reasonable to check a field that is not defined
for that user. For example, this rule:
"rules": { "field": { "metadata.ranking" : "A" } }
would generate a log message for an unranked user, which would
erroneously suggest that such a rule is an error.
This DEBUG logging will assist in diagnosing problems, without
introducing that confusion.
Backport of: #61246
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
There are currently half a dozen ways to add plugins and modules for
test clusters to use. All of them require the calling project to peek
into the plugin or module they want to use to grab its bundlePlugin
task, and then both depend on that task and extract the archive
path the task will produce. This creates cross-project dependencies that
are difficult to detect, and if the dependent plugin/module has not yet
been configured, the build will fail because the task does not yet
exist.
This commit makes the plugin and module methods for testclusters
symmetric, allowing a file provider to be passed directly, or a project
path that will produce the plugin/module zip. Internally this new
variant uses normal configurations/dependencies across projects to get
the zip artifact. It also has the added benefit that the caller no longer
needs to add a dependsOn on the bundlePlugin task to the test task.
The main changes are:
* Fix custom params going missing when using a template or script in Watcher's
logging action or Jira action.
* Add YAML tests that verify params are passed to templates and scripts successfully.
Relates to #57625
Co-authored-by: bellengao <gbl_long@163.com>
The current implementation of the filter pipe is incomplete, which is why
it got reverted. Note this is not a complete revert, as some of the
improvements of said commit (such as the PostAnalyzer) are useful in
general.
Relates #61805
(cherry picked from commit 7a7eb66f7d39586c3a3bc00dce49e6c47a23b46a)
This replaces a specialized bit set implementation used in cardinality
with our standard `BitArray`, which works exactly the same way. It's also
tracked by `BigArrays`, which is great!
BytesRefHashTests and LongObjectHashMapTests currently extend ESSingleNodeTestCase,
which builds an entire node just to run some unit tests over entirely in-memory data
structures. This commit converts them both to extend ESTestCase.
Backport of #61904 to 7.x branch.
The eql search api redirects to the search api. For this reason the eql
search api could work with concrete data stream names. However, if security
is enabled and a data stream name snippet with a wildcard was used, then
it could not resolve these expressions. This is because the EqlSearchRequest
class didn't override the `includeDataStreams()` method. This PR fixes this,
so that the security layer can properly expand data stream name wildcard
expressions for the eql search api.
This commit also moves the eql data stream test to xpack rest tests,
so that the test runs with security enabled. This is required to reproduce
the bug.
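The fix itself is essentially a one-method override, roughly:

```java
// Simplified sketch of the change in EqlSearchRequest:
@Override
public boolean includeDataStreams() {
    return true; // allows the security layer to expand data stream wildcards
}
```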
Closes #60828
FetchSubPhase has two 'execute' methods, one which takes all hits to be examined,
and one which takes a single HitContext. It's not obvious which one should be implemented
by a given sub-phase, or if implementing both is a possibility; nor is it obvious that we first
run the hitExecute methods of all subphases, and only then call all the
hitsExecute methods.
This commit reworks FetchSubPhase to replace these two variants with a processor class,
`FetchSubPhaseProcessor`, that is returned from a single `getProcessor` method. This
processor class has two methods, `setNextReader()` and `process`. FetchPhase collects
processors from all its subphases (if a subphase does not need to execute on the current
search context, it can return `null` from `getProcessor`). It then sorts its hits by docid, and
groups them by Lucene leaf reader. For each reader group, it calls `setNextReader()` on
all non-null processors, and then passes each doc id to `process()`.
Implementations of fetch sub phases can divide their concerns into per-request, per-reader
and per-document sections, and no longer need to worry about sorting docs or dealing with
reader slices.
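In outline, the new contract looks roughly like this (signatures simplified; `SearchContext` and `HitContext` are the existing fetch-phase types):

```java
import java.io.IOException;
import org.apache.lucene.index.LeafReaderContext;

// Simplified outline of the reworked API:
interface FetchSubPhase {
    // Returns null if this sub-phase has nothing to do for the current search context.
    FetchSubPhaseProcessor getProcessor(SearchContext searchContext) throws IOException;
}

interface FetchSubPhaseProcessor {
    void setNextReader(LeafReaderContext readerContext) throws IOException;

    void process(HitContext hitContext) throws IOException;
}
```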
This commit adds a test to MapperTestCase that explicitly checks that a mapper can
serialize all its default values, and that this serialization can then be re-parsed. Note that
the test is disabled for non-parametrized mappers as their serialization may in some cases
output parameters that are not accepted. Gradually moving all mappers to parametrized
form will address this.
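In essence, the new check serializes the mapper with `include_defaults` and asserts the output can be parsed back; a sketch, with illustrative helper names:

```java
// Sketch only; createMapperService(...) stands in for the test helper.
ToXContent.Params includeDefaults =
    new ToXContent.MapParams(Collections.singletonMap("include_defaults", "true"));
XContentBuilder builder = XContentFactory.jsonBuilder().startObject();
mapper.toXContent(builder, includeDefaults);
builder.endObject();
// Re-parsing the serialized form must not throw:
createMapperService(Strings.toString(builder));
```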
The commit also contains a fix to keyword mappers, which were not correctly serializing
the similarity parameter; this partially addresses #61563. It also enables `null` as a
value for `null_value` on `scaled_float`, as a follow-up to #61798.
We frequently use `long`s with `BitArray` in aggs and right now we have
to assert that the `long` fits in an `int`. This adds support for `long`
to `BitArray` so we don't need those assertions.
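For example, a quick sketch of what this enables (using the non-recycling `BigArrays` instance for brevity):

```java
import org.elasticsearch.common.util.BigArrays;
import org.elasticsearch.common.util.BitArray;

class LongBitArrayExample {
    static void example() {
        try (BitArray bits = new BitArray(1, BigArrays.NON_RECYCLING_INSTANCE)) {
            long index = 5_000_000_000L; // no longer needs to fit in an int
            bits.set(index);
            assert bits.get(index);
        }
    }
}
```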
This fixes a bug introduced by #61782. In that PR I thought I could
simplify the persistence of progress by using the progress straight
from the stats holder in the task instead of calling the get
stats action. However, I overlooked that it is then possible to
have stale progress for the reindexing task as that is only updated
when the get stats API is called.
This commit fixes that by updating the reindexing task progress
before persisting the job progress. This seems to be much more
lightweight than calling the get stats request.
Closes #61852
Backport of #61868
Search could leak memory if global ordinals were calculated as part of
a search with low level cancellation enabled. QueryPhase registers a
cancellation on the reader that is never removed, which ends up being
referenced from the global ordinals cache entry. This keeps an indirect
reference to the search context. A significant leak can occur when a
heavy aggregation (cardinality for instance) is used and a failure occurs
during search, in particular if the pages backing the HyperLogLog++ structure
are not recycled when it is closed.
This commit also fixes an issue with an unclosed resource and request
breaker adjustment in the cardinality aggregation.
For half of the plugins in x-pack, the integTest
task is now a no-op and all of the tests are now executed via a test,
yamlRestTest, javaRestTest, or internalClusterTest.
This includes the following projects:
security, spatial, stack, transform, vectors, voting-only-node, and watcher.
A few of the more specialized qa projects within these plugins
have not been changed with this PR due to additional complexity which should
be addressed separately.
related: #60630
related: #56841
related: #59939
related: #55896
For half of the plugins in x-pack, the integTest
task is now a no-op and all of the tests are now executed via a test,
yamlRestTest, javaRestTest, or internalClusterTest.
This includes the following projects:
async-search, autoscaling, ccr, enrich, eql, frozen-indices,
data-streams, graph, ilm, mapper-constant-keyword, mapper-flattened, ml
A few of the more specialized qa projects within these plugins
have not been changed with this PR due to additional complexity which should
be addressed separately.
A follow up PR will address the remaining x-pack plugins (this PR is big enough as-is).
related: #61802
related: #56841
related: #59939
related: #55896
While starting the data frame analytics process it is possible
to get an exception before the process crash handler is in place.
In addition, right after starting the process, we check that the process
is alive to ensure we capture a failed process. However, those exceptions
were unhandled.
This commit catches any exception thrown while starting the process
and sets the task to failed with the root cause error message.
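Schematically, the start-up path now looks like this (the interfaces below stand in for the real ML classes):

```java
// Schematic sketch; the interfaces stand in for the actual ML classes.
interface NativeProcess { boolean isAlive(); }
interface ProcessFactory { NativeProcess createProcess() throws Exception; }
interface AnalyticsTask { void setFailed(String reason); }

class ProcessStartSketch {
    static void start(AnalyticsTask task, ProcessFactory factory) {
        try {
            NativeProcess process = factory.createProcess();
            // Right after starting, verify the process did not die immediately.
            if (process.isAlive() == false) {
                throw new IllegalStateException("Failed to start data frame analytics process");
            }
        } catch (Exception e) {
            // Previously these exceptions escaped unhandled; now the task is
            // failed with the root cause error message.
            task.setFailed(e.getMessage());
        }
    }
}
```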
I have also taken the chance to remove some unused parameters
in `NativeAnalyticsProcessFactory`.
Relates #61704
Backport of #61838
Allow filtering through a pipe, across events and sequences.
Filter pipes are pushed down to base queries.
For now, filtering after a limit (head/tail) is forbidden, as the
semantics are still up for debate.
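An illustrative query (event and field names hypothetical): `process where true | filter user_name == "SYSTEM"`.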
Fixes #59763
(cherry picked from commit 80569a388b76cecb5f55037fe989c8b6f140761b)
This commit generalizes how QueryPhaseResultConsumer is initialized.
The query phase always uses this consumer so it doesn't need to be hidden behind
an abstract class.
PR #61474 reworked deprecation logging to rely more heavily on log4j. Unfortunately,
the changes required to log4j's configuration were not applied to the version we ship
with the Docker image.
The ML mappings upgrade test had become useless as it was
checking a field that has been the same since 6.5. This
commit switches to a field that was changed in 7.9.
Additionally, the test only used to check the results index
mappings. This commit also adds checking for the config
index.
Backport of #61340
During a rolling upgrade it is possible that a worker node will be upgraded before
the master, in which case the DFA templates will not have been installed.
Before a DFA task starts check that the latest template is installed and install it if necessary.
Several field mappers have a `null_value` parameter that allows you to specify a placeholder
value to insert into a document if the incoming value for that field is null. The default value
for this is always null, meaning "add no placeholder". However, we explicitly bar users from
setting this parameter directly to null (done in #7978, in order to fix an NPE).
This exclusion means that if a mapper is serialized with include_defaults, then we either need
to special-case null_value to ensure that it is not output when it holds the default value, or
we find that the resulting serialized form cannot be used to create a mapping. This stops us
doing some useful generic testing of mappers.
This commit permits null as a parameter value for null_value, and changes the tests to check
that it is a) permissible and b) applied without throwing errors. As part of the testing changes,
a new base class MapperServiceTestCase is refactored from MapperTestCase, holding
the various helper methods related to building mappings but not the single-mapper specific
abstract methods.
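For example, a mapping fragment such as (illustrative):
"my_field": { "type": "keyword", "null_value": null }
is now accepted, and serializing it with include_defaults can be re-parsed without error.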
Closes #58823
This removes the assertion that the header warnings we parse in the
REST client responses conform to RFC 7234, because we are not in full control
of the warnings that could be present in the responses (i.e. proxies might
emit warnings that don't comply).
We still maintain this assertion on the ES side (see `HeaderWarning#addWarning`)
for the warnings we emit.
(cherry picked from commit 1259a46cbe84d32e85cd1a7455012d177b809702)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>