Move some test methods from AnalyzeActionIT to RestAnalyzeActionTest
Allow string explain param if it can parse
Fix wrong param name in rest-api-spec
Closes #16925
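As an illustration, a minimal Python sketch (using the requests library against a local node; analyzer and text are arbitrary) showing the explain parameter passed both as a boolean and as a string that parses to one:

```python
import requests

# Both requests below should behave the same: explain as a boolean,
# and explain as a parseable string ("true").
body = {"analyzer": "standard", "text": "quick brown fox"}
for explain in (True, "true"):
    resp = requests.get("http://localhost:9200/_analyze",
                        json=dict(body, explain=explain))
    print(resp.status_code, sorted(resp.json().keys()))
```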
Internally the put pipeline API uses the node info API to validate that all specified processors in a pipeline exist on all nodes in the cluster.
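A minimal sketch of that validation from the caller's side, assuming a local node and a made-up pipeline id; if the `set` processor type were missing on any node, the request would be rejected rather than acknowledged:

```python
import requests

# Store a pipeline with a single "set" processor. The API checks (via
# node info) that the processor type exists on every node before storing.
resp = requests.put(
    "http://localhost:9200/_ingest/pipeline/my-pipeline",
    json={
        "description": "add a field",
        "processors": [{"set": {"field": "processed", "value": True}}],
    },
)
print(resp.json())  # {"acknowledged": true} when validation passes
```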
Elasticsearch 5.0 doesn't support indices with legacy checksums anymore.
The last time we wrote legacy checksums was in 1.3.0, which was already
based on Lucene 4.9, which means that all files have CRC32 checksums.
All indices that Elasticsearch can read today must be written with
Lucene version >= 4.8 anyway, so we can drop this layer of backwards
compatibility entirely.
Since we are close to upgrading to Lucene 6.0, we should get rid of this
in a more contained change than the Lucene upgrade.
The `ingest_took` is separate from `took`, which keeps track of how much time is spent on indexing/deleting/updating.
The `ingest_took` is only visible in the rest response if at least one bulk item has ingest enabled.
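A minimal sketch, assuming a local node and made-up index/pipeline names, where only the first bulk item opts into ingest:

```python
import requests

# One bulk item routes through an ingest pipeline, so the response
# carries "ingest_took" alongside the usual "took".
bulk = (
    '{"index": {"_index": "logs", "_type": "doc", "_id": "1", "pipeline": "my-pipeline"}}\n'
    '{"message": "hello"}\n'
    '{"index": {"_index": "logs", "_type": "doc", "_id": "2"}}\n'
    '{"message": "world"}\n'
)
resp = requests.post("http://localhost:9200/_bulk", data=bulk,
                     headers={"Content-Type": "application/x-ndjson"}).json()
print(resp["took"], resp.get("ingest_took"))  # ingest_took absent if no item used ingest
```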
`catch: param` is designed to catch errors generated by client-side validation logic when users don't supply valid parameters to an API request. This test though is testing the server-side validation of pipeline aggregations, and so a "param" catch is invalid. Instead we will just test for a parse_exception error type using a regex.
Expose http address in cat/nodes and cat/nodeattrs APIs
We expose a lot of information like IP address and port, but we never
expose the http address/ip:port in the CAT API. It's nice to have it
there too, since otherwise JSON parsing is required to get this information.
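A minimal sketch of fetching it as a plain cat column (the column name `http_address` is an assumption):

```python
import requests

# Ask _cat/nodes for the http address directly, instead of parsing
# the JSON node info output for the ip:port of the http endpoint.
resp = requests.get("http://localhost:9200/_cat/nodes",
                    params={"h": "name,http_address", "v": "true"})
print(resp.text)
```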
Elasticsearch should reject ids that are too long, to ensure a document
always remains retrievable for clients that impose a maximum URI length.
Closes #16034
The `keyword` field is intended to replace `not_analyzed` string fields. It is
indexed and has doc values by default, and doesn't support enabling term
vectors.
Although it doesn't support setting an analyzer for now, there are plans for
it to support basic normalization in the future, such as case folding.
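A minimal mapping sketch, with made-up index and type names:

```python
import requests

# Map "tags" as keyword: indexed and doc-values enabled by default,
# with no analysis applied to the values.
resp = requests.put(
    "http://localhost:9200/my-index",
    json={"mappings": {"doc": {"properties": {"tags": {"type": "keyword"}}}}},
)
print(resp.json())
```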
Only tasks that extend CancellableTask can be cancelled using this mechanism. If a cancellable task has children it can elect to cancel all child tasks as well. In this case a special ban parent request is sent to all nodes. This request does two things: 1) it prevents any tasks with the banned parent task from being started, and 2) it cancels all currently running tasks that have the banned task as a parent. The ban is lifted as soon as the coordinating node notifies all other nodes that the cancelled task has finished executing. If the coordinating node leaves the cluster before it has a chance to lift its bans, all bans set by this coordinating node are automatically removed.
As an option, a task can elect to automatically cancel all child tasks if their parent task was running on a node that just left the cluster. This option makes sense for cancellable heavy tasks that have no side effects and only return results to the coordinating node. With the coordinating node gone, it doesn't make sense to run such tasks any longer since their results will most likely be discarded.
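A minimal sketch of cancelling a task through the task management API, assuming a local node and at least one running task:

```python
import requests

# List running tasks, then cancel one by its id. The id format
# "nodeId:taskNumber" comes straight from the task list response.
nodes = requests.get("http://localhost:9200/_tasks").json()["nodes"]
node = next(iter(nodes.values()))
task_id = next(iter(node["tasks"]))  # e.g. "oTUltX4I:12345"
requests.post("http://localhost:9200/_tasks/%s/_cancel" % task_id)
```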
The cat API previously used the Content-Type header field for
determining the media type of the response. This is in opposition to the
HTTP spec which specifies the Accept header field for this purpose. This
commit replaces the use of the Content-Type header field with the Accept
header field in the cat API.
Closes #14421
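A minimal sketch of requesting JSON output from a cat API via the Accept header:

```python
import requests

# The response format of a cat API now follows the Accept header;
# without it the usual plain-text table comes back.
resp = requests.get("http://localhost:9200/_cat/indices",
                    headers={"Accept": "application/json"})
print(resp.json())  # a JSON array instead of the text table
```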
This processor is useful when all elements of a json array need to be processed in the same way.
This avoids having to define a separate processor for each element in an array.
Also, it is very likely that the number of elements inside a json array is unknown ahead of time.
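A minimal sketch of such a pipeline, with made-up pipeline id and field name:

```python
import requests

# A foreach processor that uppercases every element of "tags";
# "_ingest._value" refers to the array element currently being processed.
resp = requests.put(
    "http://localhost:9200/_ingest/pipeline/upper-tags",
    json={
        "description": "uppercase every tag",
        "processors": [{
            "foreach": {
                "field": "tags",
                "processor": {"uppercase": {"field": "_ingest._value"}},
            }
        }],
    },
)
print(resp.json())
```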
Retrieving distributed DF for TermVectors is, besides its esoteric justification,
a very slow process that can cause serious load on the cluster. We also don't have nearly
enough testing for this feature, and given the complexity we should remove it rather than carry it
around.
When an exception is thrown during pipeline creation within
REST calls (in put pipeline, and simulate), we now return a structured
error response to the user with details about which processor's
configuration is the cause of the issue, or which configuration property
is misconfigured, etc.
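A minimal sketch that provokes such an error via simulate, by omitting the mandatory `field` property of the `set` processor:

```python
import requests

# The "set" processor below is missing its "field" property, so simulate
# answers with a structured error pointing at the offending processor
# and property instead of an opaque exception.
resp = requests.post(
    "http://localhost:9200/_ingest/pipeline/_simulate",
    json={
        "pipeline": {"processors": [{"set": {"value": "bar"}}]},
        "docs": [{"_source": {"message": "hello"}}],
    },
)
print(resp.status_code, resp.json())
```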
We used to have a disabled test around cluster put settings, as it left cluster settings behind without a way to remove them. That has been fixed in the cluster put settings API, so the test can be re-enabled.
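A minimal sketch of the removal mechanism, where assigning null clears a previously set transient setting:

```python
import requests

# Apply a transient setting, then remove it again by assigning null
# (Python's None serializes to JSON null).
url = "http://localhost:9200/_cluster/settings"
requests.put(url, json={"transient": {"cluster.routing.allocation.enable": "none"}})
requests.put(url, json={"transient": {"cluster.routing.allocation.enable": None}})
```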
The search_after parameter provides a way to efficiently paginate from one page to the next. This parameter accepts an array of sort values; those values are then used by the searcher to sort the top hits, starting from the first document whose sort values are greater than the provided ones.
This parameter must be used in conjunction with the sort parameter; it must contain exactly the same number of values as the number of fields to sort on.
NOTE: A field with one unique value per document should be used as the last element of the sort specification. Otherwise the sort order for documents that have the same sort values would be undefined. The recommended way is to use the field `_uid`, which is certain to contain one unique value for each document.
Fixes #8192
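A minimal sketch, with made-up index and field names, that fetches one page and resumes from the sort values of its last hit:

```python
import requests

# Sort on a timestamp with _uid as the unique tiebreaker, then resume
# the next page from the sort values of the last hit on this page.
body = {
    "size": 10,
    "query": {"match_all": {}},
    "sort": [{"timestamp": "asc"}, {"_uid": "asc"}],
}
page = requests.get("http://localhost:9200/logs/_search", json=body).json()
body["search_after"] = page["hits"]["hits"][-1]["sort"]
next_page = requests.get("http://localhost:9200/logs/_search", json=body).json()
```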
Merge feature/ingest branch into master branch.
This adds the ingest feature to ES, which allows documents to be preprocessed before indexing on an ingest node.
By default a node is an ingest node. Documents are preprocessed via a pipeline. A pipeline consists
of one or more processors. Each processor makes one or more modifications to the document being processed.
There are many types of processors available out-of-the-box that are designed to make a specific change to a document being processed. In a cluster, many pipelines can be configured via dedicated pipeline APIs. A new option on the bulk
and index APIs allows control over which pipeline is picked for preprocessing. If no pipeline is specified then the ingest
feature is skipped and no preprocessing takes place.
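A minimal sketch, with made-up index and pipeline names, of selecting a pipeline on the index API:

```python
import requests

# The "pipeline" parameter picks the pipeline for preprocessing;
# leaving it out skips the ingest step entirely.
resp = requests.put(
    "http://localhost:9200/logs/doc/1",
    params={"pipeline": "my-pipeline"},
    json={"message": "hello"},
)
print(resp.json())
```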
Site plugins used to be used for things like Kibana and Marvel, but
there is no longer a need since Kibana (and Marvel as a Kibana plugin)
uses Node.js. This change removes site plugins, as well as the flag for
jvm plugins. Now all plugins are jvm plugins.