The documentation of `search_after` recommends using the `_id`
field as a tiebreaker for the sort, without warning about
the additional memory this requires. This change updates the recommendation
to use a copy of the `_id` field with doc_values enabled.
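A minimal sketch of the recommended setup, assuming a hypothetical `id_copy` keyword field that the client populates with the document id at index time (and which therefore has doc_values, unlike `_id`); the index and field names are illustrative:

```java
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.sort.SortBuilders;
import org.elasticsearch.search.sort.SortOrder;

// Sort on the real field first, then on id_copy as the tiebreaker.
SearchSourceBuilder source = new SearchSourceBuilder()
    .size(100)
    .sort(SortBuilders.fieldSort("timestamp").order(SortOrder.ASC))
    .sort(SortBuilders.fieldSort("id_copy").order(SortOrder.ASC));

// Sort values taken from the last hit of the previous page.
source.searchAfter(new Object[] { 1541500800000L, "doc-42" });

SearchRequest request = new SearchRequest("my-index").source(source);
```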
This adds a `wait_for_completion` flag which allows the user to block
the Stop API until the task has actually moved to a stopped state,
instead of returning immediately. If the flag is set, a `timeout` parameter
can be specified to determine how long (at most) to block the API
call. If unspecified, the timeout is 30s.
If the timeout is exceeded before the job moves to STOPPED, a
timeout exception is thrown. Note: this just signifies that the API
call itself timed out. The job will remain in STOPPING and eventually
flip over to STOPPED in the background.
If the user asks the API to block, we move over to the generic
threadpool so that we don't hold up a networking thread.
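Roughly, the hand-off might look like the sketch below; `StopJobHandler`, the boolean listener payload, and `waitUntilStopped()` are hypothetical stand-ins for the real classes:

```java
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.threadpool.ThreadPool;

// Hypothetical sketch of the dispatch pattern, not the actual implementation.
class StopJobHandler {
    private final ThreadPool threadPool;

    StopJobHandler(ThreadPool threadPool) {
        this.threadPool = threadPool;
    }

    void stop(boolean waitForCompletion, ActionListener<Boolean> listener) {
        if (waitForCompletion) {
            // A blocking wait must not run on a transport thread, so hop
            // over to the generic pool before polling for STOPPED.
            threadPool.executor(ThreadPool.Names.GENERIC)
                .execute(() -> waitUntilStopped(listener));
        } else {
            listener.onResponse(true);
        }
    }

    private void waitUntilStopped(ActionListener<Boolean> listener) {
        // Poll the task state until STOPPED or the timeout elapses (elided).
        listener.onResponse(true);
    }
}
```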
* Adds HLRC docs for put lifecycle policy
* Adds link to docs in client javadocs
* Fixes checkstyle
* Make the documentation use the right ack response
* [ILM] Add documentation for error handling in ILM
This adds some initial documentation for error handling and retrying failed
steps for index lifecycle management
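A rough HLRC usage sketch; the policy contents are illustrative, `client` is an existing `RestHighLevelClient`, and exact class names may differ slightly from the released API:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.core.AcknowledgedResponse;
import org.elasticsearch.client.indexlifecycle.DeleteAction;
import org.elasticsearch.client.indexlifecycle.LifecycleAction;
import org.elasticsearch.client.indexlifecycle.LifecyclePolicy;
import org.elasticsearch.client.indexlifecycle.Phase;
import org.elasticsearch.client.indexlifecycle.PutLifecyclePolicyRequest;
import org.elasticsearch.common.unit.TimeValue;

// A policy with a single delete phase that removes the index after 30 days.
Map<String, LifecycleAction> actions = new HashMap<>();
actions.put(DeleteAction.NAME, new DeleteAction());
Map<String, Phase> phases = Collections.singletonMap(
    "delete", new Phase("delete", TimeValue.timeValueDays(30), actions));

PutLifecyclePolicyRequest request =
    new PutLifecyclePolicyRequest(new LifecyclePolicy("my_policy", phases));
AcknowledgedResponse response =
    client.indexLifecycle().putLifecyclePolicy(request, RequestOptions.DEFAULT);
```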
Implement a high level client for the migration upgrade API. It wraps
RestHighLevelClient and exposes the high level IndexUpgradeRequest (new),
IndexTaskResponse for submissions with wait_for_completion=false, and
BulkByScrollResponse (already used) objects.
Relates #29827
Today it is unclear that the `storage_class` parameter to an S3 repository only
affects new objects and does not rewrite any existing objects. This commit
clarifies this point.
We changed the way realm settings are defined, and this affects custom
realms in SecurityExtensions. This change adds those details to the
breaking changes docs.
Relates: #30241
* [DOCS] ILM API Ref edits
* [DOCS] Fixed endpoint for DELETE policy.
* [DOCS] Removed comparison to setting index.lifecycle.name to null.
* [DOCS] Fixed xrefs to explain API.
Today our OS information returned in node stats only returns a
high-level name of the OS (e.g., "Linux"). Yet, for some uses this is
too high-level, and knowing the underlying OS at a finer level of
granularity can be useful. This commit extracts the pretty name on
Linux from /etc/os-release. This pretty name usually includes the Linux
vendor and the Linux vendor version number (e.g., Fedora 28).
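A stripped-down sketch of the extraction (the real logic lives in OsProbe and is more defensive):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Pull PRETTY_NAME (e.g. PRETTY_NAME="Fedora 28 (Workstation Edition)")
// out of /etc/os-release, falling back to the generic name.
static String prettyName() throws IOException {
    return Files.readAllLines(Paths.get("/etc/os-release")).stream()
        .filter(line -> line.startsWith("PRETTY_NAME="))
        .map(line -> line.substring("PRETTY_NAME=".length()).replace("\"", ""))
        .findFirst()
        .orElse("Linux");
}
```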
In #26541 we introduced a hard limit of 1024 on the number of fields a query can
be expanded to. Instead of using a hard limit, we should make this
configurable. This change removes the hard-limit check and uses the existing
`indices.query.bool.max_clause_count` setting instead.
Closes #34778
If the underlying mount point for the JNA temporary directory is mounted
noexec on Linux, then the JVM will not be able to map the native code in
as executable. This will prevent JNA from executing and will prevent
Elasticsearch from being able to execute some functions that rely on
native code (e.g., memory locking and installing system call
filters). We do not want to get into the business of catching exceptions
and parsing messages to detect this, because these exception messages can
change on us. We also do not want to jump through a lot of hoops to
check the underlying mount point for noexec. Instead, we will rely on
documentation to address this problem. This commit adds a note to the
important system configuration section of the docs that the JNA temporary
directory must not be on a mount point with the noexec mount option.
This commit uses the index settings version so that a follower can
replicate index settings changes as needed from the leader.
Co-authored-by: Martijn van Groningen <martijn.v.groningen@gmail.com>
* Moved `AcknowledgedResponse` to core package
* Made AcknowledgedResponse not abstract and provided a default parser,
so that when the field name is not overwritten there
is no need for a subclass.
Relates to #33824
Sometimes users are confused about whether they can use the Convert Processor
to change an existing field's type to another type even when the field has already
been ingested. This confusion stems from the first line of the description. This changes
that line and also adds some detail to the code snippet.
With this change, `Version` no longer carries information about the qualifier.
We still need a way to show the "display version" that does have both
qualifier and snapshot; this is now stored by the build and read from `META-INF`.
We've decided that the bulk, delete, get, index, update, and search APIs should not
contain this request parameter, and we will instead accept both typed and typeless calls.
This change adds support for clearing the cache of a realm. The realms
cache may contain a stale set of credentials or incorrect role
assignment, which can be corrected by clearing the cache of the entire
realm or just that of a specific user.
Relates #29827
The remove-ilm-from-index API was using the DELETE http method
to signify that something is being removed. Although metadata
about ILM for the index is being deleted, no entity/resource
is deleted during this operation. POST is more in line with
what this API is actually doing: it modifies the metadata for
an index. As part of this change, `remove` is also appended to the path
to be more explicit about its actions.
This moves all Realm settings to an Affix definition.
However, because different realm types define different settings
(potentially conflicting settings) this requires that the realm type
become part of the setting key.
Thus, we now need to define realm settings as:
xpack.security.authc.realms:
  file.file1:
    order: 0
  native.native1:
    order: 1
- This is a breaking change to realm config
- This is also a breaking change to custom security realms (SecurityExtension)
We have an example in `reindex`'s docs about copying from many indices
at once. It doesn't work at the moment because we only allow a single
type per index. We didn't notice it in the docs tests because those
tests didn't copy any documents. This change:
1. Adds documents to the docs tests to fully exercise the snippet.
2. Fixes the example by moving all copied documents to the same type (see the sketch below).
3. Moves the note about id collisions and expands on it because it is
even more likely than before.
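Translated to the HLRC, the fixed snippet looks roughly like this (assuming the client's reindex support; index names match the docs example):

```java
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.index.reindex.BulkByScrollResponse;
import org.elasticsearch.index.reindex.ReindexRequest;

// Copy from several single-type indices at once; every copied document
// must land in the same type in the destination index.
ReindexRequest request = new ReindexRequest();
request.setSourceIndices("twitter", "blog");
request.setDestIndex("all_together");
request.setDestDocType("_doc"); // one shared type for everything copied

BulkByScrollResponse response = client.reindex(request, RequestOptions.DEFAULT);
```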
Closes #35150
This commit removes the Joda time usage from ILM and the HLRC components of ILM.
It also fixes an issue where using the `?human=true` flag could have caused the
parser not to work. These millisecond fields now follow the standard we use
elsewhere in the code, with additional fields added iff the `human` flag is
specified.
This is a breaking change for ILM, but since ILM has not yet been released, no
compatibility shim is needed.
This changes the current script.max_size_in_bytes setting to be dynamic so it can be
set through the cluster settings API. This setting is also applied to inline scripts
in the compile method of ScriptService, to prevent excessively long inline
scripts from being compiled. The script length limit is removed from Painless as
this is no longer necessary with the protection in compile.
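Once dynamic, the limit can be raised at runtime, e.g. via the HLRC (a sketch; the value is arbitrary and `client` is an existing `RestHighLevelClient`):

```java
import org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.common.settings.Settings;

// Raise the script size limit cluster-wide until the next full restart.
ClusterUpdateSettingsRequest request = new ClusterUpdateSettingsRequest();
request.transientSettings(Settings.builder()
    .put("script.max_size_in_bytes", 131072)
    .build());
client.cluster().putSettings(request, RequestOptions.DEFAULT);
```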
With this commit we differentiate between permanent circuit breaking
exceptions (which require intervention from an operator and should not
be automatically retried) and transient ones (which may heal themselves
eventually and should be retried). Furthermore, the parent circuit
breaker will categorize a circuit breaking exception as either transient
or permanent based on the categorization of memory usage of its child
circuit breakers.
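A sketch of how a caller might use the distinction, assuming the exception exposes its durability via a `getDurability()` accessor:

```java
import org.elasticsearch.common.breaker.CircuitBreaker;
import org.elasticsearch.common.breaker.CircuitBreakingException;

// Transient breaks may heal on their own (e.g. once in-flight requests
// complete); permanent ones need an operator to free memory or scale up.
boolean shouldRetry(CircuitBreakingException e) {
    return e.getDurability() == CircuitBreaker.Durability.TRANSIENT;
}
```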
Closes #31986
Relates #34460
The repository plugin for OpenStack Swift was originally developed by the Wikimedia Foundation but is now retired.
This changes the link to point to the repo where the actively maintained fork now lives.
When we connect to remote clusters, there may be a few more routers/firewalls in-between compared to when we connect to nodes in the same cluster. We've experienced cases where firewalls drop connections completely and keep-alives seem not to be enough, or they are not properly configured. With this commit we allow enabling application-level pings specifically from CCS nodes to the selected remote nodes through the new setting `cluster.remote.${clusterAlias}.transport.ping_schedule`. The new setting is similar to `transport.ping_schedule` but it does not affect intra-cluster communication; pings are only sent to a specific remote cluster when specifically enabled, as they are disabled by default.
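For example, enabling a 30s ping to a remote cluster aliased `cluster_two` might look like this (a sketch using the dynamic cluster settings API; the alias is illustrative):

```java
import org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.common.settings.Settings;

// Only pings to cluster_two are affected; intra-cluster pings keep
// following the regular transport.ping_schedule setting.
ClusterUpdateSettingsRequest request = new ClusterUpdateSettingsRequest();
request.persistentSettings(Settings.builder()
    .put("cluster.remote.cluster_two.transport.ping_schedule", "30s")
    .build());
client.cluster().putSettings(request, RequestOptions.DEFAULT);
```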
Relates to #34405
Otherwise this throws an exception: `java.lang.IllegalArgumentException: Rejecting mapping update to [seats] as the final mapping would have more than 1 type: [seat, _doc]`.
Maybe later, for the types removal, we can modify `seats.json` to have a type of `_doc` instead of `seat`.
This adds the security `_authenticate` API to the HLREST client.
It is unlike some of the other APIs because the request does not
have a body.
The commit also creates the `User` entity. It is important
to note that the `User` entity does not have the `enabled`
flag. The `enabled` flag is part of the response, alongside
the `User` entity.
Moreover this adds the `SecurityIT` test class
(extending `ESRestHighLevelClientTestCase`).
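Usage is minimal because there is no request body (`client` is an existing `RestHighLevelClient`):

```java
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.security.AuthenticateResponse;
import org.elasticsearch.client.security.user.User;

// Only RequestOptions are needed; the API has no request body.
AuthenticateResponse response = client.security().authenticate(RequestOptions.DEFAULT);
User user = response.getUser();       // principal, roles, metadata, ...
boolean enabled = response.enabled(); // part of the response, not of User
```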
Relates #29827
This PR renames the CRUD APIS for ILM
GET _ilm/<policy>, _ilm -> _ilm/policy/<policy>, _ilm/policy
PUT _ilm/<policy> -> _ilm/policy/<policy>
DELETE _ilm/<policy> -> _ilm/policy/<policy>
Closes #34929.
The `random_score` function produces values between 0 (inclusive) and 1
(exclusive) and documented it with fancy mathematical range notation
(`[0, 1)`). It is so fancy I thought it was a typo. This changes the
documentation to use words.
Relates to #35084
This changes the RollupSearch endpoint to proactively resolve index
patterns. If the index pattern(s) match more than one rollup index,
an exception is thrown as before. But if the pattern only matches one
rollup index, execution is allowed to continue (unlike before where
it would assume all patterns were for raw data).
This also allows the search endpoint to resolve aliases that point to
a rollup index.
Also tweaks the documentation to make this clear.
Closes #34828
* Remove a tip about ignore_above that only makes sense with multiple types.
* Remove a line from the percolator documentation that refers to multiple types.
This commit adds a new single value metric aggregation that calculates
the statistic called median absolute deviation, which is a measure of
variability that works on more types of data than standard deviation.
Our calculation of MAD is approximated using t-digests. In the collect
phase, we collect each value visited into a t-digest. In the reduce
phase, we merge all value t-digests, then create a t-digest of
deviations using the first t-digest's median and centroids.
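For intuition, the exact (non-approximated) statistic works like this: given the values {1, 2, 3, 4, 9}, the median is 3, the absolute deviations from that median are {2, 1, 0, 1, 6}, and the median of those deviations, 1, is the MAD.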
Bulk requests in the high level REST client should be consistent with what is
possible in the REST API, and therefore should support global parameters. Global
parameters are passed in the URL in the REST API.
Some parameters are mandatory - index, type - and validation fails
if they are not provided before the bulk is executed.
Optional parameters - routing, pipeline.
The usage of these should be consistent across sync/async execution,
the bulk processor, and BulkRequestBuilder.
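A sketch of the intended usage, assuming the new global constructor and setters (names as proposed here; `client` is an existing `RestHighLevelClient`):

```java
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.common.xcontent.XContentType;

// Global index/type plus global pipeline and routing apply to every
// sub-request that does not set its own values.
BulkRequest bulk = new BulkRequest("logs", "_doc")
    .pipeline("my_pipeline")
    .routing("user-7");
bulk.add(new IndexRequest().source(XContentType.JSON, "field", "value"));
client.bulk(bulk, RequestOptions.DEFAULT);
```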
Closes #26026
When combine_script and reduce_script were made into required
parameters for Scripted Metric aggregations in #33452, the docs were
not updated to reflect that. This marks those parameters as required
in the documentation.
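For reference, a builder-style request that supplies all four scripts (a sketch; the Painless snippets are illustrative):

```java
import org.elasticsearch.script.Script;
import org.elasticsearch.search.aggregations.AggregationBuilders;

// combine_script and reduce_script are now mandatory; omitting either
// fails request validation (see #33452).
AggregationBuilders.scriptedMetric("total_price")
    .initScript(new Script("state.sum = 0"))
    .mapScript(new Script("state.sum += doc['price'].value"))
    .combineScript(new Script("return state.sum"))
    .reduceScript(new Script("double s = 0; for (v in states) { s += v } return s"));
```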
Deprecates the `_source_include` and `_source_exclude` url parameters
in favor of `_source_includes` and `_source_excludes`, because those
are consistent with the rest of Elasticsearch's APIs.
Relates to #22792
This further applies the pattern set in #34125 to reduce copy-and-paste
in the multi-document CRUD portion of the High Level REST Client docs.
It also adds line wraps to snippets that are too wide to fit into the box
when rendered in the docs, following up on the work started in #34163.
This commit fixes two issues with the CCR API specification:
- remove the CCR stats endpoint, it is not currently implemented
- fix the documentation links
The file structure finder endpoint can find the NDJSON
(newline-delimited JSON) file format, but called it
`json`. This change renames the `format` for this file
structure to `ndjson`, which is more precise and will
hopefully avoid confusion.
* Changed the auto follow stats to also include follow stats.
* Renamed the auto follow stats api to stats api and changed its url path
from `/_ccr/auto_follow/stats` to `/_ccr/stats`.
* Removed `/_ccr/stats` url path for the follow stats api, which makes
the index parameter a required parameter.
* Fixed docs.
Adds support for the Clear Roles Cache API to the High Level Rest
Client. As part of this a helper class, NodesResponseHeader, has been
added that enables parsing the nodes header from responses that are
node requests.
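A usage sketch (the role name is illustrative; `client` is an existing `RestHighLevelClient`):

```java
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.security.ClearRolesCacheRequest;
import org.elasticsearch.client.security.ClearRolesCacheResponse;

// Evict the cached entry for one role on every node.
ClearRolesCacheRequest request = new ClearRolesCacheRequest("my_admin_role");
ClearRolesCacheResponse response =
    client.security().clearRolesCache(request, RequestOptions.DEFAULT);
// NodesResponseHeader reports how many nodes handled the request.
int successful = response.getHeader().getSuccessful();
```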
Relates to #29827
This commit is our first introduction to cross-cluster replication
docs. In this commit, we introduce the cross-cluster replication API
docs. We also add skeleton docs for additional content that will be added
in a series of follow-up commits.
Documents the new structured logfile format for auditing
that was introduced by #31931. Most changes herein
are for 6.x. In 7.0 the deprecated format is gone and a
follow-up PR is in order.
This change adds a section about the global search setting
`indices.query.bool.max_clause_count` that limits the number of boolean clauses
allowed in a Lucene BooleanQuery.
Closes #19858
These were disabled because ingest-attachment was not playing well with
JDK 11. We have addressed that by upgrading the Tika dependency, yet
forgot to re-enable these tests. This commit enables these tests again.
In a future major version, we will be introducing a soft limit on the
number of shards in a cluster based on the number of nodes in the
cluster. This limit will be configurable and checked on operations
which create or open shards; a warning will be issued if the operation
would take the cluster over the limit.
There is an option to enable strict enforcement of the limit, which
turns the warnings into errors. In a future release, the option will be
removed and strict enforcement will be the default (and only) behavior.
- Restrict visibility of Aggregators and Factories
- Move PipelineAggregatorBuilders up a level so it is consistent with
AggregatorBuilders
- Checkstyle line length fixes for a few classes
- Minor odds/ends (swapping to method references, formatting, etc)
We should delete a job by directly talking to the allocated
task and telling it to shutdown. Today we shut down a job
via the persistent task framework. This is not ideal because,
while the job has been removed from the persistent task
cluster state, the allocated task continues to live until it gets the
shutdown message.
This means a user can delete a job, immediately delete
the rollup index, and then see new documents appear in
the just-deleted index. This happens because the indexer
in the allocated task is still running and indexes a few
more documents before getting the shutdown command.
In this PR, the transport action is changed to a TransportTasksAction,
and we invoke onCancelled() directly on the matching job.
The race condition still exists after this PR (albeit less likely),
but this was a precursor to fixing the issue and a self-contained
chunk of code. A second PR will follow up to fix the race itself.
Extend querying support on multiple indices from requiring strictly
identical mappings to merely compatible ones.
Use the FieldCapabilities API (extended through #33803) for mapping merging.
Closes #31837, #31611
Implement the functionality to translate
`field IN (value1, value2, ...)` expressions to proper Lucene queries,
a Painless script, or local processors, depending on the use case.
The `IN` expression can be used in SELECT, WHERE and HAVING clauses.
Closes: #32955
`CONVERT` works exactly like cast, with slightly different syntax:
`CONVERT(<value>, <data_type>)` as opposed to `CAST(<value> AS <data_type>)`.
Moreover it supports the MS-SQL format of the data types, `SQL_<type>`,
e.g.: `SQL_INTEGER`.
Closes: #34513
* Replace custom type names with _doc in REST examples.
* Avoid using two mapping types in the percolator docs.
* Rename doc -> _doc in the main repository README.
* Also replace some custom type names in the HLRC docs.
This commit switches to using a trial license in the docs tests that run
on the default distribution. This is needed so that docs tests can be
executed against non-basic features.
With remote clusters taking on a larger role, we have to make the
infrastructure more generic than being tied to cross-cluster search
(CCS). We want to refer to the remote clusters configuration in the
cross-cluster replication (CCR) docs. Yet, these docs are still tied to
CCS. This commit extracts the remote clusters docs from CCS (with some
wording changes to make them more general) so that we can refer to them
in the CCR docs.
When an envelope that crosses the dateline is specified as part of
a geo_shape query, it shouldn't have its left and right points
flipped when parsed.
Fixes #34418
- Make SQL aware of missing and/or unmapped fields, treating them as NULL
- Make _all_ functions and operators null-safe aware, including when used
in filtering or sorting contexts
- Add missing and null-safe doc value extractor
- Modify dataset to have null fields spread around (in groups of 10)
- Enforce missing last and unmapped_type inside sorting
- Consolidate Predicate templating and declaration
- Add support for Like/RLike in scripting
- Generalize NULLS LAST/FIRST
- Introduce early schema declaration for CSV spec tests to keep the doc
snippets in place (introduce schema:: prefix for upfront declaration)
Fix #32079
The `term` and `phrase` suggesters have different options to filter candidates
based on their frequencies. The `popular` mode for instance filters candidate
terms that occur in fewer docs than the original term. However, when we compute this
threshold we use the total term frequency of a term instead of the document frequency.
This is not in line with the actual filtering, which is always based on the document
frequency. This change fixes this discrepancy and clarifies the meaning of the
different frequencies in use in the suggesters.
It also ensures that the threshold doesn't overflow the maximum allowed value (Integer.MAX_VALUE).
Closes #34282
Add example for selectively clearing just the request, query or fielddata cache
and for selectively clearing the cache for specific fields.
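The selective form might look like this through the transport-level request (a sketch; the index and field names are illustrative):

```java
import org.elasticsearch.action.admin.indices.cache.clear.ClearIndicesCacheRequest;

// Clear only the fielddata cache, and only for two fields of one index;
// the query and request caches are left untouched.
ClearIndicesCacheRequest request = new ClearIndicesCacheRequest("twitter");
request.queryCache(false);
request.requestCache(false);
request.fieldDataCache(true);
request.fields("user", "retweet_count");
```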
Closes #34287
* Adding new xpack.ml.max_lazy_ml_nodes setting to docs
* Fixing docs, making it clearer what the setting does
* Adding note about external process need
We'd disabled them because we didn't have a way to clean up after each
test. I implemented #34342 which adds the clean ups so now we can
re-enable the tests.
In the `setup` sections we have to use `raw` requests instead of
`x-pack` requests because we don't have the json config for x-pack.
Closes #33319
This commit moves the definition of domainSplit into Java and exposes it
as a Painless whitelist extension. The method also no longer needs
params, and a version which ignores params is added and deprecated.
Introduces client-specific request and response classes that do not
depend on the server
The `type` parameter is named `licenseType` in the response class to be
more descriptive. The parts that make up the acknowledged-required
response are given slightly different names than their server-response
types to be consistent with the naming in the put license API
Tests do not cover all cases because the integ test cluster starts up
with a trial license - this will be addressed in a future commit
Tweak the upgrade instructions for moving from pre-6.3-with-x-pack to
post-6.3-default distribution. Specifically, you have to remove the
x-pack plugin before upgrading because 6.4 doesn't understand how to
remove it.
Relates to #34307
* Replace deprecated field `code` with `source` for stored scripts (#25127)
* Replace examples using the deprecated endpoint `{index}/{type}/_search`
with `{index}/_search` (#29468)
* Use a system property to avoid deprecation warnings after the Update
Scripts have been moved to their own context (#32096)
This change disallows negative query boosts. Negative scores are not allowed in Lucene 8, so
it is easier to just disallow negative boosts entirely. We should also deprecate negative boosts
in 6.x in order to ensure that users are aware when they upgrade to ES 7.
Relates #33309
We added support for the role mapper expression DSL in #33745,
which allows us to build the role mapper expression used in the
role mapping (as rules for determining user roles based on what
the boolean expression resolves to).
This change now adds support for create/update role mapping
API to the high-level rest client.
The ingest pipeline that is produced is very simple. It
contains a grok processor if the format is semi-structured
text, a date processor if the format contains a timestamp,
and a remove processor if required to remove the interim
timestamp field parsed out of semi-structured text.
Eventually the UI should offer the option to customize the
pipeline with additional processors to perform other data
preparation steps before ingesting data to an index.
* New OCTET_LENGTH function
* Changed the way the FunctionRegistry stores functions, considering the alphabetic ordering by name
* Added documentation for the RANDOM function
* HLRC: ML Add preview datafeed api
* Changing deprecation handling for parser
* Removing some duplication in docs, will address other APIs in another PR
* HLRC: ML Cleanup docs
* updating get datafeed stats docs
This further applies the pattern set in #34125 to reduce copy-and-paste
in the single document CRUD portion of the High Level REST Client docs.
It also adds line wraps to snippets that are too wide to fit into the box
when rendered in the docs, following up on the work started in #34163.
The "lookupUser" method on a realm facilitates the "run-as" and
"authorization_realms" features.
This commit allows a realm to be used for "lookup only", in which
case the "authenticate" method (and associated token methods) are
disabled.
It does this through the introduction of a new
"authentication.enabled" setting, which defaults to true.
Building automatons can be costly. For the most part we cache things
that use automatons so the cost is limited.
However:
- We don't (currently) do that everywhere (e.g. we don't cache role
mappings)
- It is sometimes necessary to clear some of those caches which can
cause significant CPU overhead and processing delays.
This commit introduces a new cache in the Automatons class to avoid
unnecessarily recomputing automatons.
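An illustrative sketch of the memoization (not the actual implementation; the real cache also needs size/weight tuning):

```java
import java.util.concurrent.ExecutionException;

import org.apache.lucene.util.automaton.Automata;
import org.apache.lucene.util.automaton.Automaton;
import org.elasticsearch.common.cache.Cache;
import org.elasticsearch.common.cache.CacheBuilder;

// Memoize pattern -> automaton so repeated role-mapping evaluations do
// not rebuild (and re-determinize) the same automatons.
class AutomatonCache {
    private final Cache<String, Automaton> cache =
        CacheBuilder.<String, Automaton>builder().setMaximumWeight(10_000).build();

    Automaton get(String pattern) throws ExecutionException {
        return cache.computeIfAbsent(pattern, p -> buildAutomaton(p));
    }

    private Automaton buildAutomaton(String pattern) {
        // Stand-in for the real, expensive construction.
        return Automata.makeString(pattern);
    }
}
```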