This commit removes a TODO in the IndexNameExpressionResolver that
indicated the API should use a Set instead of a List. However, this
TODO was not entirely correct: the ordering of arguments matters because
of negations when evaluating wildcards, and we also allow a list of
patterns like `*,-foo,*`, which would have a different meaning even when
using a Set with insertion ordering.
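As a toy illustration (not the resolver's actual code), resolving the expressions
left to right gives a different result than any unordered view of the same
expressions, because a trailing wildcard re-adds whatever an earlier negation removed:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Toy resolver: apply each expression in order against a fixed set of indices.
public class WildcardOrderingExample {
    static Set<String> resolve(List<String> expressions, List<String> allIndices) {
        Set<String> result = new LinkedHashSet<>();
        for (String expression : expressions) {
            if (expression.equals("*")) {
                result.addAll(allIndices);              // wildcard adds every index
            } else if (expression.startsWith("-")) {
                result.remove(expression.substring(1)); // negation removes a single index
            } else {
                result.add(expression);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> indices = List.of("foo", "bar", "baz");
        // prints [bar, baz, foo]: the trailing "*" re-adds foo after "-foo" removed it
        System.out.println(resolve(List.of("*", "-foo", "*"), indices));
        // prints [bar, baz]: without the trailing "*", foo stays excluded
        System.out.println(resolve(List.of("*", "-foo"), indices));
    }
}
```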
Relates #52788
Backport of #52963
IDEs can sometimes run annotation processors that leave files in
`src/main/generated/**/*.java`, causing Spotless to complain. Even
though this path ought not to exist, exclude it anyway in order to avoid
spurious failures.
Since version 8.4, `MMapDirectory` has an optimization to read long[]
arrays directly in little endian order, which postings leverage. So it'd
be more efficient to open postings with `MMapDirectory`.
I refactored the existing logic a bit to better explain why every listed
file extension is opened with `mmap`.
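For context, the selection is extension-based; a rough, hypothetical sketch (the real
code and the exact extension list live in the hybrid directory implementation):

```java
// Hypothetical helper, for illustration only; the real selection logic and the exact
// list of extensions live in the store's hybrid directory implementation.
final class MmapFileSelection {
    static boolean useMmap(String fileName) {
        String extension = fileName.substring(fileName.lastIndexOf('.') + 1);
        switch (extension) {
            case "doc": // postings: MMapDirectory can read their packed long[] values directly
            case "tim": // terms dictionary
            case "dvd": // doc values data
            case "cfs": // compound files
                return true;  // open through MMapDirectory
            default:
                return false; // open through the regular NIO directory
        }
    }
}
```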
Add `:qa:os` and `:benchmarks` to the list of automatically formatted
projects, and apply some manual fix-ups to polish it up.
In particular, I noticed that `Files.write(...)`, when passed a list, will
automatically apply UTF-8 encoding and write a newline after each line,
making it easier to use than FileUtils.append. It has been available
since Java 1.8.
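For example (a stand-alone snippet with a made-up file name):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class WriteLinesExample {
    public static void main(String[] args) throws IOException {
        Path output = Paths.get("notes.txt"); // hypothetical output file
        List<String> lines = Arrays.asList("first line", "second line");
        // Encodes each line as UTF-8 and appends a line separator after each one.
        Files.write(output, lines);
    }
}
```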
Also, in the Allocators class, a number of methods declared exceptions that IntelliJ reported were never thrown; as far as I could see this is true, so I removed those declarations.
* Move In, InPipe and InProcessor out of SQL to the common QL project.
* Move tests classes to the QL project.
* Create SQL dedicated In class to handle SQL specific data types.
* Update SQL classes to use the InPipe and InProcessor QL classes.
* Extract common Foldables methods in QL project.
* Be more explicit when folding and converting a foldable value, by
removing most of the code inside the Foldables class.
(cherry picked from commit 7425042f86f66df8c207c5e96f9b9848bda2b4c3)
Backport of #51233 to the 7.x branch.
Tries to load a `Mapper` instance for the mapping snippet of a dynamic template.
This should catch things like using an analyzer that is undefined or mapping attributes that are unused.
This is best effort:
* If `{{name}}` placeholder is used in the mapping snippet then validation is skipped.
* If `match_mapping_type` is not specified then validation is performed for all mapping types.
If parsing succeeds with a single mapping type then the dynamic template is considered valid.
If a dynamic template mapping snippet is detected to be invalid at mapping update time then the mapping update fails for indices created on 8.0.0-alpha1 and later. For indices created on a prior version a deprecation warning is emitted instead. In 7.x clusters the mapping update will never fail in case of an invalid dynamic template mapping snippet and a deprecation warning will always be emitted.
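A condensed sketch of that best-effort flow, using hypothetical helper names rather
than the real mapper-parsing API:

```java
import java.util.List;
import java.util.function.BiPredicate;

// Illustration only: tryParseMapping stands in for actually building a Mapper from the snippet.
final class DynamicTemplateValidation {
    static boolean isValid(String mappingSnippet, String matchMappingType,
                           List<String> allMappingTypes,
                           BiPredicate<String, String> tryParseMapping) {
        if (mappingSnippet.contains("{{name}}")) {
            return true; // {{name}} placeholder present: validation is skipped
        }
        List<String> typesToTry = matchMappingType != null
            ? List.of(matchMappingType) // validate against the declared type only
            : allMappingTypes;          // no match_mapping_type: validate for all mapping types
        // The template is considered valid as soon as the snippet parses for a single mapping type.
        return typesToTry.stream().anyMatch(type -> tryParseMapping.test(mappingSnippet, type));
    }
}
```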
Closes #17411
Closes #24419
Co-authored-by: Adrien Grand <jpountz@gmail.com>
When user A runs as user B and performs any API key related operations,
user B's realm should always be associated with the API key.
Currently user A's realm is used when getting or invalidating API keys
with owner=true. This PR fixes that bug.
resolves: #51975
The `terms` aggregation can be sorted by the results of its
sub-aggregations. Because it uses that sorting for filtering to the
top-n, it tries not to construct all of the buckets for the child
aggregations. This has its own interesting problems around reduction, but
they aren't especially relevant to this change. This change moves that
optimization from the `TermsAggregator` into the aggregators being
sorted on. This should make it clearer what is going on, and it
unifies the optimization with validating the sort.
Finally, this should enable some minor optimizations to save a few
comparisons when sorting multi-valued buckets. I'll get those in a
follow up because they are now *fairly* obvious. They probably won't be
a huge performance improvement, but it'll be nice anyway.
We made roles pluggable, but never updated the client to account for
this. This means that when speaking to a modern cluster, application
logs are spammed with warning messages about unrecognized roles. This
commit addresses this by accounting for the fact that roles can extend
beyond master/data/ingest now.
* [ML][Inference] Add support for multi-value leaves to the tree model (#52531)
This adds support for multi-value leaves. This is a prerequisite for multi-class boosted tree classification.
Update documentation for:
* restResources config (related #52114)
* call out YAML vs. Java based Rest tests
* update example to use newer syntax
* update example to target a test that is not skipped
* provide example for bwcRest test (related #52383)
A recent PR #52114 introduced two new tasks to copy the REST api and tests.
A couple of bugs were found in that initial PR that prevent the incremental
build from working as expected.
Matching an empty pattern should be equivalent to matching everything, but it was
coded as matching nothing. Fixed with explicit checks against empty patterns (see
the sketch after this list of fixes).
The fileCollection.plus return value was ignored. Fixed by changing how the
input's fileTree is constructed.
If a project has a src/test/resources directory, and tests are being copied
without a rest-api-spec/test directory, the copy could result in a no-op. This was
masked by the other bugs and fixed by minor changes to the logic that determines
whether a project has tests.
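As a small illustration of the empty-pattern fix, in plain Java rather than the
build's Gradle code (the regex-style matcher is an assumption):

```java
import java.util.List;

// Illustration only: an empty pattern list must mean "match everything",
// not fall through to a matcher that matches nothing.
final class PatternMatching {
    static boolean matchesAny(List<String> patterns, String path) {
        if (patterns.isEmpty()) {
            return true; // explicit check against empty patterns
        }
        return patterns.stream().anyMatch(path::matches); // treats each pattern as a regex (an assumption)
    }
}
```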
This adds a new configurable field called `indices_options`. This allows users to create or update the indices_options used when a datafeed reads from an index.
This is necessary for the following use cases:
- Reading from frozen indices
- Allowing certain indices in multiple index patterns to not exist yet
These index options are available on datafeed creation and update. Users may specify them as URL parameters or within the configuration object.
closes https://github.com/elastic/elasticsearch/issues/48056
This change adds the recall@k metric and refactors precision@k to match
the new metric.
Recall@k is an important metric to use for learning to rank (LTR)
use-cases. Candidate generation or first-phase ranking functions
are often optimized for high recall, in order to generate as many
relevant candidates in the top-k as possible for a second phase of
ranking. Adding this metric allows tuning that base query for LTR.
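For reference, recall@k is the fraction of all relevant documents that appear in the
top k results; a minimal illustration, not the actual rank-eval implementation:

```java
import java.util.List;
import java.util.Set;

// recall@k = |relevant documents in the top k results| / |all relevant documents|
final class RecallAtK {
    static double recallAtK(List<String> rankedDocIds, Set<String> relevantDocIds, int k) {
        if (relevantDocIds.isEmpty()) {
            return 0.0; // no known relevant documents: treat recall as 0 (a simplifying choice)
        }
        long relevantInTopK = rankedDocIds.stream()
            .limit(k)
            .filter(relevantDocIds::contains)
            .count();
        return (double) relevantInTopK / relevantDocIds.size();
    }
}
```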
See: https://github.com/elastic/elasticsearch/issues/51676
Backports: https://github.com/elastic/elasticsearch/pull/52577
Add index name(s) into the source for the cluster state update done when putting a mapping.
This ensures that the pending tasks API includes information on source indices.
Computing the stats for completion fields may involve a significant amount of
work since it walks every field of every segment looking for completion fields.
Innocuous-looking APIs like `GET _stats` or `GET _cluster/stats` do this for
every shard in the cluster. This repeated work is unnecessary since these stats
do not change between refreshes; in many indices they remain constant for a
long time.
This commit introduces a cache for these stats which is invalidated on a
refresh, allowing most stats calls to bypass the work needed to compute them on
most shards.
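The caching pattern is roughly the following (a minimal sketch, not the engine's
actual implementation):

```java
import java.util.function.Supplier;

// Minimal sketch: compute the expensive stats lazily and throw the cached value away on refresh.
class RefreshInvalidatedCache<T> {
    private volatile T cached; // null means the value must be recomputed

    T getOrCompute(Supplier<T> expensiveComputation) {
        T value = cached;
        if (value == null) {
            value = expensiveComputation.get(); // e.g. walking every field of every segment
            cached = value;
        }
        return value;
    }

    void invalidateOnRefresh() {
        cached = null; // stats may change after a refresh
    }
}
```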
Closes #51915
Backport of #51991
Add query execution and return the actual results from
Elasticsearch inside the tests
(cherry picked from commit 3e039282bf991af87604a6d4f8eada19d5e33842)
The introductory sections of the reference manual contains some simplified
instructions for adding a node to the cluster. Unfortunately they are a little
too simplified and only really work for clusters running on `localhost`. If you
try and follow these instructions for a distributed cluster then the new node
will, confusingly, auto-bootstrap itself into a distinct one-node cluster.
Multiple nodes running on localhost is a valid config, of course, but we should
spell out that these instructions are really only for experimentation and that
it takes a bit more work to add nodes to a distributed cluster. This commit
does so.
Also, the "important config" instructions for discovery say that you MUST set
`discovery.seed_hosts` whereas in fact it is fine to ignore this setting and
use a dynamic discovery mechanism instead. This commit weakens this statement
and links to the docs for dynamic discovery mechanisms.
Finally, this section is also overloaded with some technical details that are
not important for this context and are adequately covered elsewhere, and
completely fails to note that the default discovery port is 9300. This commit
addresses this.
This change replaces the TrainedModelConfig#isAvailableWithLicense method with calls to
XPackLicenseState#isAllowedByLicense.
Please note there are subtle changes to the code logic, but they are the right changes:
* Instead of a Platinum license, an Enterprise license now guarantees availability.
* No explicit check when the license requirement is basic. Since a basic license is always available, this check is unnecessary.
* Trial license is always allowed.
We aren't able to reproduce this failure or figure out why this test failed.
This commit adds more assertions so we can narrow the scope.
Relates #52223
Since #51905, we use the local checkpoint of the safe commit to
calculate the number of uncommitted operations in the translog stats. If a
periodic flush triggered by afterWriteOperation completes before we sync
the translog, then the last commit is not safe. We also need to sync the
translog from the Engine instead of from the Translog so that we can advance
the safe commit.
Relates #51905
Closes #52223
Since #51905, we skip translog recovery if the local checkpoint of the
safe commit equals the global checkpoint. This change adjusts the
test not to create a new snapshot in that case.
Closes #52221
Relates #51905
Separates the translog from the index deletion conditions (allowing the translog to be cleaned
up more eagerly), and avoids taking the write lock on the translog if no clean-up is actually
necessary.
Today we use the translog_generation of the safe commit as the minimum
required translog generation for recovery. This approach has a
limitation: we won't be able to clean up the translog unless we flush.
Reopening an already recovered engine will create a new empty translog,
and we leave it there until we force flush.
This commit removes the translog_generation commit tag and uses the
local checkpoint of the safe commit to calculate the minimum required
translog generation for recovery instead.
Closes #49970
This PR moves the majority of the Watcher REST tests under
the Watcher x-pack plugin.
Specifically, moves the Watcher tests from:
x-pack/plugin/test
x-pack/qa/smoke-test-watcher
x-pack/qa/smoke-test-watcher-with-security
x-pack/qa/smoke-test-monitoring-with-watcher
to:
x-pack/plugin/watcher/qa/rest (/test and /qa/smoke-test-watcher)
x-pack/plugin/watcher/qa/with-security
x-pack/plugin/watcher/qa/with-monitoring
Additionally, this disables Watcher from the main
x-pack test cluster and consolidates the stop/start logic
for the tests listed.
No changes to the tests (beyond moving them) are included.
3rd party tests and doc tests (which also touch Watcher)
are not included in the changes here.
We embed the :reaper project jar in the build-tools jar so we can spawn
a reaper process at build runtime. Due to this, the jar technically
isn't part of the test runtime classpath, but for input snapshotting
purposes, we should be treating it as such. Instead, because it lives
in META-INF, Gradle treats it as a normal file, which in practice means
its hash changes on every build (timestamps, etc).
This commit changes our input snapshotting strategy such that instead
we explicitly add the jar as an input to any test tasks using Gradle's
runtime classpath normalization strategy (ignore timestamps, jar entry
order, etc) and ignore the file in META-INF. This ensures that we can
properly cache test results for build-tools, while still ensuring that
changes to the :reaper project trigger re-execution of tests.