A JSON schema was recently introduced for the REST API specification (#54252).
This PR introduces a third-party validation tool to ensure that the
REST specification conforms to the schema.
The task is applied to the three projects that contain REST API specifications.
The plugin wires this task into the precommit task, and it should be
considered part of the build tools' public API, allowing any plugin
developer to contribute their plugin's specification.
An ignore parameter has been introduced for the task to allow specific
files to be excluded from the validation. Issues will soon be logged
for the files ignored in this PR, with links so they can be fixed.
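As an illustration of how such a task with an ignore parameter might look, here is a minimal sketch of a Gradle task written in Java; the class, property, and method names are hypothetical, not the actual build-tools API:

```
import org.gradle.api.DefaultTask;
import org.gradle.api.file.FileCollection;
import org.gradle.api.tasks.InputFiles;
import org.gradle.api.tasks.TaskAction;

import java.io.File;
import java.util.HashSet;
import java.util.Set;

// Hypothetical validation task; the real class in the build tools may differ.
public class ValidateRestSpecTask extends DefaultTask {

    private FileCollection specFiles;
    private final Set<String> ignore = new HashSet<>();

    @InputFiles
    public FileCollection getSpecFiles() {
        return specFiles;
    }

    public void setSpecFiles(FileCollection specFiles) {
        this.specFiles = specFiles;
    }

    // Files named here are skipped by the validation.
    public void ignore(String... fileNames) {
        for (String fileName : fileNames) {
            ignore.add(fileName);
        }
    }

    @TaskAction
    public void validate() {
        for (File spec : specFiles) {
            if (ignore.contains(spec.getName())) {
                continue; // explicitly excluded until its issue is fixed
            }
            // validate the spec file against the JSON schema here
        }
    }
}
```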
Closes #54314
Out of the box, "access granted" audit events are not logged
for system users. The list of system users was stale and included
only the _system and _xpack users. This commit expands this list
with _xpack_security and _async_search, effectively reducing the
auditing noise by not logging the audit events of these system
users out of the box.
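For illustration, the exclusion boils down to a membership check against the set of system user names; a minimal sketch with illustrative class and method names:

```
import java.util.Set;

// Illustrative sketch: the expanded set of system users whose
// "access granted" events are excluded from auditing by default.
class SystemUserAuditFilter {
    private static final Set<String> SYSTEM_USERS =
        Set.of("_system", "_xpack", "_xpack_security", "_async_search");

    static boolean shouldAuditAccessGranted(String username) {
        return SYSTEM_USERS.contains(username) == false;
    }
}
```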
Closes #37924
The testPerformAction test has been failing periodically because
Hamcrest's containsStringIgnoringCase does not lowercase using the
same Locale that the test infrastructure sets.
This commit falls back to explicitly lowercasing using the root
locale.
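A minimal sketch of the underlying pitfall and the fix; the Turkish locale is the classic case where default-locale lowercasing diverges:

```
import java.util.Locale;

public class LocaleLowercase {
    public static void main(String[] args) {
        String text = "INDEX";
        // Under a Turkish locale, 'I' lowercases to dotless 'ı',
        // so a naive case-insensitive comparison can fail.
        System.out.println(text.toLowerCase(new Locale("tr"))); // "ındex"
        // Locale.ROOT gives stable, locale-independent results.
        System.out.println(text.toLowerCase(Locale.ROOT));      // "index"
    }
}
```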
This commit adds a new GeoShapeBoundsAggregator to the spatial plugin and registers it with the GeoShapeValuesSourceType. This enables geo_bounds aggregations on geo_shape fields.
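For example, a client could now request bounds over a shape field; a hedged sketch using the builder-style client API, with illustrative index and field names:

```
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.metrics.GeoBounds;

class GeoShapeBoundsExample {
    static GeoBounds boundsOfShapes(Client client) {
        SearchResponse response = client.prepareSearch("shapes") // illustrative index
            .setSize(0)
            .addAggregation(AggregationBuilders.geoBounds("bounds").field("geometry"))
            .get();
        return response.getAggregations().get("bounds");
    }
}
```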
This change fixes a problem with updating V2 index templates.
The validation added in #54933 didn't filter the list of conflicting templates correctly, so
an updated template always clashed with itself unless its index patterns were completely changed.
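The fix amounts to excluding the template being updated from its own conflict check; a minimal sketch with illustrative names:

```
import java.util.List;
import java.util.Map;

class TemplateConflictCheck {
    // overlaps maps template name -> overlapping index patterns
    static void validateNoConflicts(String templateName, Map<String, List<String>> overlaps) {
        // An updated template always overlaps its previous version, so the
        // template under update must be filtered out before failing validation.
        overlaps.keySet().removeIf(name -> name.equals(templateName));
        if (overlaps.isEmpty() == false) {
            throw new IllegalArgumentException(
                "template [" + templateName + "] conflicts with templates " + overlaps.keySet());
        }
    }
}
```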
Introduces InstantiatingObjectParser, which is similar to the
ConstructingObjectParser, but instantiates the object using its constructor
instead of a builder function.
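A hedged sketch of the intended usage, assuming a builder-style setup that mirrors ConstructingObjectParser's declare methods; exact method names and packages may differ in detail:

```
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.InstantiatingObjectParser;

public class Thing {
    public static final InstantiatingObjectParser<Thing, Void> PARSER;

    static {
        InstantiatingObjectParser.Builder<Thing, Void> builder =
            InstantiatingObjectParser.builder("thing", true, Thing.class);
        builder.declareString(ConstructingObjectParser.constructorArg(), new ParseField("name"));
        builder.declareInt(ConstructingObjectParser.constructorArg(), new ParseField("size"));
        PARSER = builder.build();
    }

    private final String name;
    private final int size;

    // The parser locates and invokes this constructor via reflection,
    // matching the declared fields to its arguments.
    public Thing(String name, int size) {
        this.name = name;
        this.size = size;
    }
}
```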
Closes #52499
After #53562, the `geo_shape` field mapper is registered within
a module. This opens the door for introducing a new `geo_shape`
field mapper into the Spatial Plugin that has doc-values support.
This is very much an extension of the server's GeoShapeFieldMapper,
but with the addition of the doc-values implementation.
Documents several parameters missing from the bulk API's response body
docs. Also moves several response-related chunks of text to the response
body section.
Relates to #55237
Data frame analytics process currently reports progress as
an integer `progress_percent`. We parse that and report it
from the _stats API as the progress of the `analyzing` phase.
However, we want to allow the DFA process to report progress
for more than one phase. This commit prepares for this by
parsing `phase_progress` from the process, an object that
contains the `phase` name plus the `progress_percent` for that
phase.
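A sketch of what parsing such an object can look like; the field names come from the text above, while the class shape and the use of ConstructingObjectParser are illustrative:

```
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;

public class PhaseProgress {
    private static final ParseField PHASE = new ParseField("phase");
    private static final ParseField PROGRESS_PERCENT = new ParseField("progress_percent");

    public static final ConstructingObjectParser<PhaseProgress, Void> PARSER =
        new ConstructingObjectParser<>("phase_progress",
            a -> new PhaseProgress((String) a[0], (int) a[1]));

    static {
        PARSER.declareString(ConstructingObjectParser.constructorArg(), PHASE);
        PARSER.declareInt(ConstructingObjectParser.constructorArg(), PROGRESS_PERCENT);
    }

    private final String phase;
    private final int progressPercent;

    PhaseProgress(String phase, int progressPercent) {
        this.phase = phase;
        this.progressPercent = progressPercent;
    }
}
```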
Backport of #55580
Instead of doing our own checks against REST status, shard counts, and shard failures, this commit changes all our extractor search requests to set `.setAllowPartialSearchResults(false)`.
- Scrolls are automatically cleared when a search failure occurs with `.setAllowPartialSearchResults(false)` set.
- Code error handling is simplified
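A minimal sketch of the pattern, with an illustrative index name:

```
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;

class ExtractorSearch {
    static SearchResponse searchOrFail(Client client) {
        return client.prepareSearch("my-index")
            // Any shard-level failure now fails the whole request up front,
            // replacing manual checks of status, shard counts, and failures.
            .setAllowPartialSearchResults(false)
            .setQuery(QueryBuilders.matchAllQuery())
            .get();
    }
}
```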
closes https://github.com/elastic/elasticsearch/issues/40793
Issue #55521 suggested that audit messages were not written when
closing an unassigned job. This is not the case, but we didn't
have a test to prove it.
Backport of #55571
There is no guarantee that the observer and the subsequent cluster state
update will execute before we move on to the next test, so we have to wait
for the observer + cluster state update cycle to complete before moving on.
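A sketch of waiting for the cycle in a test, assuming a ClusterStateObserver is at hand; the timeout value is illustrative:

```
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateObserver;
import org.elasticsearch.common.unit.TimeValue;

class WaitForClusterStateUpdate {
    static void awaitNextClusterState(ClusterStateObserver observer) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        observer.waitForNextChange(new ClusterStateObserver.Listener() {
            @Override
            public void onNewClusterState(ClusterState state) {
                latch.countDown();
            }

            @Override
            public void onClusterServiceClose() {
                latch.countDown();
            }

            @Override
            public void onTimeout(TimeValue timeout) {
                latch.countDown();
            }
        });
        // Block the test until the observer has seen the update.
        if (latch.await(10, TimeUnit.SECONDS) == false) {
            throw new AssertionError("timed out waiting for cluster state update");
        }
    }
}
```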
closes #55481
The ML info endpoint returns the max_model_memory_limit setting
if one is configured. However, it is still possible to create
a job that cannot run anywhere in the current cluster because
no node in the cluster has enough memory to accommodate it.
This change adds an extra piece of information,
limits.effective_max_model_memory_limit, to the ML info
response that returns the biggest model memory limit that could
be run in the current cluster assuming no other jobs were
running.
The idea is that the ML UI will be able to warn users who try to
create jobs with higher model memory limits that their jobs will
not be able to start unless they add a bigger ML node to their
cluster.
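Conceptually, the new figure is the largest limit any single ML node could accommodate; a rough sketch under the assumption that each node's usable ML memory is already known (the real computation in the ML plugin accounts for memory percentages and per-node overheads):

```
import java.util.Map;

class EffectiveMaxModelMemory {
    // nodeMlMemoryBytes: usable ML memory per node, however it is derived.
    static long effectiveMaxModelMemoryLimit(Map<String, Long> nodeMlMemoryBytes) {
        // The biggest job that can run somewhere is bounded by the single
        // best-provisioned ML node, assuming no other jobs are running.
        return nodeMlMemoryBytes.values().stream()
            .mapToLong(Long::longValue)
            .max()
            .orElse(0L);
    }
}
```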
Backport of #55529
Adds a "node" field to the response from the following endpoints:
1. Open anomaly detection job
2. Start datafeed
3. Start data frame analytics job
If the job or datafeed is assigned to a node immediately then
this field will return the ID of that node.
In the case where a job or datafeed is opened or started lazily
the node field will contain an empty string. Clients that want
to test whether a job or datafeed was opened or started lazily
can therefore check for this.
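A sketch of that client-side check; the method shape is illustrative:

```
class LazyAssignmentCheck {
    // "node" is the new field from the response; an empty string
    // means the job or datafeed was opened or started lazily.
    static boolean wasOpenedLazily(String node) {
        return node != null && node.isEmpty();
    }
}
```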
Backport of #55473
PEMUtils would incorrectly fill the encryption password with zeros
(the '\0' character) after decrypting a PKCS#8 key.
Since PEMUtils did not take ownership of this password, it should not
zero it out, because it does not know whether the caller will use that
password array again. This is in fact what PEMKeyConfig does: it
uses the key encryption password as the password for the ephemeral
keystore that it creates in order to build a KeyManager.
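A sketch of the failure mode; PEMUtils.readPrivateKey's signature is quoted from memory and the surrounding names are illustrative:

```
import java.nio.file.Path;
import java.security.PrivateKey;

import org.elasticsearch.xpack.core.ssl.PEMUtils;

class PemPasswordReuse {
    void buildKeyManager(Path keyPath, char[] password) throws Exception {
        PrivateKey key = PEMUtils.readPrivateKey(keyPath, () -> password);
        // Before the fix, PEMUtils did the equivalent of:
        //     Arrays.fill(password, '\0');
        // destroying the caller's only copy. A later use of the same array,
        // e.g. as the ephemeral keystore password, then saw all zeros.
    }
}
```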
Backport of #55457
Adds an example for bulk API requests that include failures.
Also documents guidance on using the `filter_path` parameter
to narrow the bulk API response to errors.
Closes #55237
If some of the nodes are pre-7.8 nodes, they may not support the `index.prefer_v2_templates`
setting (since it exists only on 7.8+). This commit makes sure that we only add this setting to the
index's metadata if all of the nodes are 7.8+. Otherwise we get errors like:
```
[ WARN ][o.e.i.c.IndicesClusterStateService] [v7.7.0-3] [test-snapshot-index][2] marking and sending shard failed due to [failed to create index]
java.lang.IllegalArgumentException: unknown setting [index.prefer_v2_templates] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:544) ~[elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:489) ~[elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:460) ~[elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
at org.elasticsearch.indices.IndicesService.createIndexService(IndicesService.java:595) ~[elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:549) ~[elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:176) ~[elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createIndices(IndicesClusterStateService.java:484) [elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:246) [elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$5(ClusterApplierService.java:517) [elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
```
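A sketch of the gating logic described above, assuming access to the cluster state; the method shape is illustrative:

```
import org.elasticsearch.Version;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.common.settings.Settings;

class PreferV2TemplatesGate {
    static void maybeAddSetting(ClusterState state, Settings.Builder indexSettings, boolean preferV2) {
        // Only add the setting when every node understands it (7.8+);
        // otherwise older nodes reject the index metadata as shown above.
        if (state.nodes().getMinNodeVersion().onOrAfter(Version.V_7_8_0)) {
            indexSettings.put("index.prefer_v2_templates", preferV2);
        }
    }
}
```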
Relates to #55411
Relates to #53101
Resolves #55539
In the test, after the first load event it is not known which models are
cached, as loading a later one will evict an earlier one and the order is
not known. The models could have been loaded once or twice, not exactly twice.
Removes the 'Testing' chapter from the Elasticsearch Reference guide.
This chapter was originally written so that users of the Java HLRC client could
use the same test classes when testing Elasticsearch in their own applications.
However, this is no longer the case, nor is it recommended.
Closes #55257.
This change folds the removal of the in-progress snapshot entry
into setting the safe repository generation. Outside of removing
an unnecessary cluster state update, this also has the advantage
of removing a somewhat inconsistent cluster state where the safe
repository generation points at `RepositoryData` that contains a
finished snapshot while it is still in-progress in the cluster
state, making it easier to reason about the state machine of
upcoming concurrent snapshot operations.
In the documentation of `pre_filter_shard_size` we state that the pre-filter
phase will be executed if the request targets more than 128 shards, yet in the
current test randomization we check that
TransportSearchAction.shouldPreFilterSearchShards is already true for 127
targeted shards. This should be raised to 129 instead.
Closes #55514
Today a read-only engine requires a complete history of operations, in the
sense that its local checkpoint must equal its maximum sequence number. This is
a valid check for read-only engines that were obtained by closing an index
since closing an index waits for all in-flight operations to complete. However
a snapshot may not have this property if it was taken while indexing was
ongoing, but that's ok.
This commit weakens the check for a complete history to exclude the case of a
searchable snapshot.
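Schematically, the check weakens from an unconditional invariant to one gated on requiring complete history; a sketch with illustrative names:

```
import org.elasticsearch.index.seqno.SeqNoStats;

class ReadOnlyEngineHistoryCheck {
    static void checkCompleteHistory(SeqNoStats seqNoStats, boolean requireCompleteHistory) {
        if (requireCompleteHistory == false) {
            return; // searchable snapshots may legitimately have gaps
        }
        if (seqNoStats.getLocalCheckpoint() != seqNoStats.getMaxSeqNo()) {
            throw new IllegalStateException("history is incomplete: local checkpoint ["
                + seqNoStats.getLocalCheckpoint() + "] != max seq no ["
                + seqNoStats.getMaxSeqNo() + "]");
        }
    }
}
```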
Relates #50999
This change ensures that we return the latest expiration time
when retrieving the response from the index.
This commit also fixes a bug that stops the garbage collection of saved responses if the async search index is deleted.
Validates that global V2 templates don't configure the index.hidden setting.
This also prevents updating the component template to add the index.hidden
setting if that component template is referenced by a global index template.
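A sketch of the validation; the exception choice and method shape are illustrative:

```
import java.util.List;

import org.elasticsearch.cluster.metadata.IndexMetadata;
import org.elasticsearch.common.settings.Settings;

class GlobalTemplateValidation {
    static void validateNotHidden(String name, List<String> indexPatterns, Settings templateSettings) {
        boolean isGlobal = indexPatterns.contains("*");
        if (isGlobal && IndexMetadata.INDEX_HIDDEN_SETTING.exists(templateSettings)) {
            throw new IllegalArgumentException(
                "global V2 template [" + name + "] may not specify the [index.hidden] setting");
        }
    }
}
```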
(cherry picked from commit 2e768981809887649f49d265d039f056985f7e6a)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
To read from GCS repositories we're currently using Google SDK's official BlobReadChannel,
which issues a new request every 2MB (default chunk size for BlobReadChannel) using range
requests, and fully downloads the chunk before exposing it to the returned InputStream. This
means that the SDK issues a very large number of requests to download large blobs.
Increasing the chunk size is not an option, as that would cause a correspondingly
large amount of heap memory to be consumed by the download process.
The Google SDK does not provide the right abstractions for a streaming download. This PR
uses the lower-level primitives of the SDK to implement a streaming download, similar to what
S3's SDK does.
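A hedged sketch of a streaming download via the SDK's lower-level JSON API client; whether these exact calls match the PR's implementation is an assumption:

```
import java.io.InputStream;

import com.google.api.services.storage.Storage;

class GcsStreamingDownload {
    // storageClient is the low-level com.google.api.services.storage.Storage
    // client, not the high-level one backing BlobReadChannel.
    static InputStream openStream(Storage storageClient, String bucket, String blobName) throws Exception {
        Storage.Objects.Get get = storageClient.objects().get(bucket, blobName);
        // Direct download streams the blob in one request instead of
        // issuing a ranged request per 2MB chunk.
        get.getMediaHttpDownloader().setDirectDownloadEnabled(true);
        return get.executeMediaAsInputStream();
    }
}
```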
Also closes #55505
Peer recovery fails if the primary does not see the recovering replica
in the replication group (when the cluster state update on the primary
is delayed). To verify the local recovery stats, we have to remember
this value on the first try, because the local recovery happens only once,
and its stats are reset when the recovery fails.
Closes #54829
If more than 100 shard-follow tasks are trying to connect to the remote
cluster, then some of them will abort with "connect listener queue is
full". This is because we retry on EsRejectedExecutionException, but not
on RejectedExecutionException.
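A sketch of the broadened retry check; the names are illustrative, and the subclass relationship noted in the comment is an assumption:

```
import java.util.concurrent.RejectedExecutionException;

class RetryPolicy {
    static boolean shouldRetry(Exception e) {
        // Retrying on the JDK RejectedExecutionException also covers
        // EsRejectedExecutionException, assuming the latter subclasses it;
        // previously only the ES-specific type was retried.
        return e instanceof RejectedExecutionException;
    }
}
```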
The systemd extender is a scheduled execution that ensures we
repeatedly let systemd know during startup that we are still starting
up. We cancel this scheduled execution once the node has successfully
started up. This extender is wrapped in a SetOnce, which we expose
directly. This commit addresses this by putting the extender behind a
getter, which hides the implementation detail that the extender is
wrapped in a SetOnce. This cleans up some issues in tests, ensuring
that we make assertions about the extender itself rather than about
the SetOnce.
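A sketch of the pattern, with illustrative class and method names; SetOnce here is Lucene's org.apache.lucene.util.SetOnce:

```
import org.apache.lucene.util.SetOnce;
import org.elasticsearch.threadpool.Scheduler;

class SystemdExtenderHolder {
    // Final field with set-once semantics, per the commit described below.
    private final SetOnce<Scheduler.Cancellable> extender = new SetOnce<>();

    void startExtender(Scheduler.Cancellable scheduled) {
        extender.set(scheduled); // throws if set twice
    }

    // The getter hides the SetOnce wrapper, so tests assert on the
    // extender itself rather than on the implementation detail.
    Scheduler.Cancellable extender() {
        return extender.get();
    }

    void onNodeStarted() {
        extender().cancel(); // stop notifying systemd once startup completes
    }
}
```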
When Elasticsearch is starting up, we schedule a thread to repeatedly
let systemd know that we are still in the process of starting up. Today
we use a non-final field for this. This commit changes this to a SetOnce
so that we can mark the field as final and get stronger guarantees when
reasoning about the state of execution here.
`updateAndGet` could actually call the internal method more than once on contention.
The JavaDocs say:
```* @param updateFunction a side-effect-free function```
So, it could apply multiple updates on contention, creating a race condition where stats are double counted.
To fix this, I am going to use a `ReadWriteLock`. The `LongAdder` objects allow fast, thread-safe writes in high-contention environments. These can be protected by the `ReadWriteLock::readLock`.
When stats are persisted, I need to call reset on all these adders. This is NOT thread safe if additions are taking place concurrently. So, I am going to protect that with `ReadWriteLock::writeLock`.
This should prevent race conditions while allowing high(ish) throughput on the highly contended paths in inference.
I did some simple throughput tests and this change is not significantly slower and is simpler to grok (IMO).
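A minimal sketch of the approach described above: LongAdder for cheap concurrent increments, guarded by a ReadWriteLock so that reset-and-persist is exclusive; class and method names are illustrative:

```
import java.util.concurrent.atomic.LongAdder;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class InferenceStats {
    private final LongAdder inferenceCount = new LongAdder();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    void incrementInferenceCount() {
        lock.readLock().lock(); // many writers may hold the read lock concurrently
        try {
            inferenceCount.increment();
        } finally {
            lock.readLock().unlock();
        }
    }

    long resetAndGet() {
        lock.writeLock().lock(); // exclusive: no increments during sumThenReset
        try {
            return inferenceCount.sumThenReset();
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```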
closes https://github.com/elastic/elasticsearch/issues/54786