This adds a new `/_cluster/allocation/explain` API that explains why a
shard can or cannot be allocated to nodes in the cluster. Additionally,
it will show where the master *desires* to put the shard, according to
the `ShardsAllocator`.
It looks like this:
```
GET /_cluster/allocation/explain?pretty
{
  "index": "only-foo",
  "shard": 0,
  "primary": false
}
```
You can also send an empty body, which means "explain the allocation for the
first unassigned shard you find".
The output when a shard is unassigned looks like this:
```
{
  "shard" : {
    "index" : "only-foo",
    "index_uuid" : "KnW0-zELRs6PK84l0r38ZA",
    "id" : 0,
    "primary" : false
  },
  "assigned" : false,
  "unassigned_info" : {
    "reason" : "INDEX_CREATED",
    "at" : "2016-03-22T20:04:23.620Z"
  },
  "nodes" : {
    "V-Spi0AyRZ6ZvKbaI3691w" : {
      "node_name" : "Susan Storm",
      "node_attributes" : {
        "bar" : "baz"
      },
      "final_decision" : "NO",
      "weight" : 0.06666675,
      "decisions" : [ {
        "decider" : "filter",
        "decision" : "NO",
        "explanation" : "node does not match index include filters [foo:\"bar\"]"
      } ]
    },
    "Qc6VL8c5RWaw1qXZ0Rg57g" : {
      "node_name" : "Slipstream",
      "node_attributes" : {
        "bar" : "baz",
        "foo" : "bar"
      },
      "final_decision" : "NO",
      "weight" : -1.3833332,
      "decisions" : [ {
        "decider" : "same_shard",
        "decision" : "NO",
        "explanation" : "the shard cannot be allocated on the same node id [Qc6VL8c5RWaw1qXZ0Rg57g] on which it already exists"
      } ]
    },
    "PzdyMZGXQdGhqTJHF_hGgA" : {
      "node_name" : "The Symbiote",
      "node_attributes" : { },
      "final_decision" : "NO",
      "weight" : 2.3166666,
      "decisions" : [ {
        "decider" : "filter",
        "decision" : "NO",
        "explanation" : "node does not match index include filters [foo:\"bar\"]"
      } ]
    }
  }
}
```
And when the shard *is* assigned, the output looks like:
```
{
  "shard" : {
    "index" : "only-foo",
    "index_uuid" : "KnW0-zELRs6PK84l0r38ZA",
    "id" : 0,
    "primary" : true
  },
  "assigned" : true,
  "assigned_node_id" : "Qc6VL8c5RWaw1qXZ0Rg57g",
  "nodes" : {
    "V-Spi0AyRZ6ZvKbaI3691w" : {
      "node_name" : "Susan Storm",
      "node_attributes" : {
        "bar" : "baz"
      },
      "final_decision" : "NO",
      "weight" : 1.4499999,
      "decisions" : [ {
        "decider" : "filter",
        "decision" : "NO",
        "explanation" : "node does not match index include filters [foo:\"bar\"]"
      } ]
    },
    "Qc6VL8c5RWaw1qXZ0Rg57g" : {
      "node_name" : "Slipstream",
      "node_attributes" : {
        "bar" : "baz",
        "foo" : "bar"
      },
      "final_decision" : "CURRENTLY_ASSIGNED",
      "weight" : 0.0,
      "decisions" : [ {
        "decider" : "same_shard",
        "decision" : "NO",
        "explanation" : "the shard cannot be allocated on the same node id [Qc6VL8c5RWaw1qXZ0Rg57g] on which it already exists"
      } ]
    },
    "PzdyMZGXQdGhqTJHF_hGgA" : {
      "node_name" : "The Symbiote",
      "node_attributes" : { },
      "final_decision" : "NO",
      "weight" : 3.6999998,
      "decisions" : [ {
        "decider" : "filter",
        "decision" : "NO",
        "explanation" : "node does not match index include filters [foo:\"bar\"]"
      } ]
    }
  }
}
```
Only "NO" decisions are returned by default, but all decisions can be
shown by specifying the `?include_yes_decisions=true` parameter in the
request.
Resolves #14593
In #17198, we removed the suggest transport action, which
used the `suggest` threadpool to execute requests. Now the
`suggest` threadpool is unused and suggest requests are
executed on the `search` threadpool.
In 5.0 we don't allow index settings to be specified on the node level, i.e.
in yaml files or via command line arguments. This can cause problems during
an upgrade if this was used extensively. For instance, if analyzers were
specified on a node level, this might cause the index to be closed when
imported (see #17187). In such a case all indices relying on this
must be updated via `PUT /${index}/_settings`. Yet, this API has slightly
different semantics since it overrides existing settings. To make this less
painful, this change adds a `preserve_existing` parameter on that API to ensure
we have the same semantics as if the setting was applied on the node level.
This change also adds a better error message and a change to the migration guide
to ensure upgrades are smooth if index settings are specified on the node level.
If an index setting is detected, this change fails node startup and prints a message
like this:
```
*************************************************************************************
Found index level settings on node level configuration.
Since elasticsearch 5.x index level settings can NOT be set on the nodes
configuration like the elasticsearch.yaml, in system properties or command line
arguments. In order to upgrade all indices the settings must be updated via the
/${index}/_settings API. Unless all settings are dynamic all indices must be closed
in order to apply the upgrade. Indices created in the future should use index templates
to set default values.
Please ensure all required values are updated on all indices by executing:
curl -XPUT 'http://localhost:9200/_all/_settings?preserve_existing=true' -d '{
"index.number_of_shards" : "1",
"index.query.default_field" : "main_field",
"index.translog.durability" : "async",
"index.ttl.disable_purge" : "true"
}'
*************************************************************************************
```
Also replaced the PercolatorQueryRegistry with the new PercolatorQueryCache.
The PercolatorFieldMapper stores the rewritten form of each percolator query's xcontent
in a binary doc values field. This makes sure that the query rewrite happens only during
indexing (some queries, for example, fetch shapes or terms in remote indices) and
speeds up the loading of the queries in the percolator query cache.
Because the percolator now works inside the search infrastructure a number of features
(sorting fields, pagination, fetch features) are available out of the box.
The following feature requests are automatically implemented via this refactoring:
Closes #10741 Closes #7297 Closes #13176 Closes #13978 Closes #11264 Closes #4317
This change adds the infrastructure to run the rest tests on a multi-node
cluster that uses 2 different minor versions of elasticsearch. It doesn't implement
any dedicated BWC tests but rather leverages the existing REST tests.
Since we don't have a real version to test against, the tests use the current version
until the first minor / RC is released to ensure the infrastructure works.
Relates to #14406 Closes #17072
This commit adds fields bytes_recovered and files_recovered to the cat
recovery API. These fields, respectively, indicate the total number of
bytes and files recovered. Additionally, for consistency, some totals
fields and translog recovery fields have been renamed.
Closes #17064
The ingest stats include the following statistics:
* `ingest.total.count` - The total number of documents ingested during the lifetime of this node
* `ingest.total.time_in_millis` - The total time spent on ingest preprocessing of documents during the lifetime of this node
* `ingest.total.current` - The total number of documents currently being ingested
* `ingest.total.failed` - The total number of ingest preprocessing operations that failed during the lifetime of this node
These stats are also returned on a per-pipeline basis.
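As an illustrative sketch of where these appear (the `ingest` node stats metric and the nested layout are assumed from the stat names above; the pipeline name and numbers are hypothetical):
```
GET /_nodes/stats/ingest

"ingest" : {
  "total" : {
    "count" : 1234,
    "time_in_millis" : 56,
    "current" : 0,
    "failed" : 2
  },
  "pipelines" : {
    "my-pipeline" : {
      "count" : 1234,
      "time_in_millis" : 56,
      "current" : 0,
      "failed" : 2
    }
  }
}
```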
`wait_for_completion` defaults to false. If set to true then the API will
wait for all the tasks that it finds to stop running before returning. You
can use the `timeout` parameter to prevent it from waiting forever. If you
don't set a timeout parameter it'll default to 30 seconds.
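For example, a sketch of waiting on the task list API (the parameter values are illustrative):
```
GET /_tasks?wait_for_completion=true&timeout=10s
```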
Also adds a log message to rest tests if any tasks overrun the test. This
is just a log (instead of failing the test) because lots of tasks are run
by the cluster on its own and they shouldn't cause the test to fail. Things
like fetching disk usage from the other nodes, for example.
Switches the request to getter/setter style methods as we're going that
way in the Elasticsearch code base. Reindex is all getter/setter style.
Closes #16906
Move some test methods from AnalyzeActionIT to RestAnalyzeActionTest
Allow a string `explain` param if it can be parsed
Fix wrong param name in rest-api-spec
Closes #16925
Internally the put pipeline API uses this information from the node info API to validate that all specified processors in a pipeline exist on all nodes in the cluster.
The cluster stats api now returns counts for each node role. The `master_data`, `master_only`, `data_only` and `client` fields have been removed from the response in favour of `master`, `data`, `ingest` and `coordinating_only`. The same node can have multiple roles, and hence contributes to multiple role counts. Every node is implicitly a coordinating node, so whenever a node has no explicit roles, it will be counted as coordinating only.
_cat/nodes used to return `c` for client node or `d` for data node as part of the node.role column. This commit changes it to return `m` for master eligible, `d` for data and/or `i` for ingest. A node with no explicit roles will be a coordinating only node and marked with `-`. A node can obviously have multiple roles. The master column has been adapted to return only whether a node is the current master (`*`) or not (`-`).
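As an illustrative sketch of the new column (node names and role combinations are hypothetical):
```
GET /_cat/nodes?v&h=name,node.role,master
name   node.role master
node-1 mdi       *
node-2 di        -
node-3 -         -
```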
Elasticsearch 5.0 doesn't support indices with legacy checksums anymore.
The last time we wrote legacy checksums was in 1.3.0, which was already based
on lucene 4.9, which means that all files have CRC32 checksums.
All indices that Elasticsearch can read today must be written with
lucene version >= 4.8 anyway, so we can drop this layer of backwards
compatibility entirely.
Since we are close to upgrading to Lucene 6.0 we should get rid of this
in a more contained change than the lucene upgrade.
The `ingest_took` is separate from `took`, which keeps track of how much time is spent on indexing/deleting/updating.
The `ingest_took` is only visible in the rest response if at least one bulk item has ingest enabled.
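As a sketch, an abridged bulk response carrying both fields (values are hypothetical, items omitted):
```
{
  "took" : 30,
  "ingest_took" : 20,
  "errors" : false,
  "items" : [ ... ]
}
```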
`catch: param` is designed to catch errors generated by client-side validation logic when users don't supply valid parameters to an API request. This test though is testing the server-side validation of pipeline aggregations, and so a "param" catch is invalid. Instead we will just test for a parse_exception error type using a regex.
Expose http address in cat/nodes and cat/nodeattrs APIs
We expose a lot of information like IP address and port, but never
expose the http address/ip:port in the CAT API. It's nice to have it
there too, since otherwise json parsing is required to get this information.
Elasticsearch should reject ids that are too long, to ensure a document
always remains retrievable for clients that impose a maximum URI length.
Closes #16034
The `keyword` field is intended to replace `not_analyzed` string fields. It is
indexed and has doc values by default, and doesn't support enabling term
vectors.
Although it doesn't support setting an analyzer for now, there are plans for
it to support basic normalization in the future such as case folding.
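A minimal sketch of mapping a field as `keyword` (index, type, and field names are hypothetical):
```
PUT /my-index
{
  "mappings" : {
    "my-type" : {
      "properties" : {
        "tag" : { "type" : "keyword" }
      }
    }
  }
}
```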
Only tasks that extend CancellableTask can be cancelled using this mechanism. If a cancellable task has children it can elect to cancel all child tasks as well. In this case a special ban parent request is sent to all nodes. This request does two things: 1) it prevents any tasks with the banned parent task from being started, and 2) it cancels all currently running tasks that have the banned task as a parent. The ban is lifted as soon as the coordinating node notifies all other nodes that the cancelled task has finished executing. If the coordinating node leaves the cluster before it has a chance to lift its bans, all bans set by this coordinating node are automatically removed.
As an option a task can elect to automatically cancel all child tasks if their parent task was running on a node that just left the cluster. This option makes sense for cancellable heavy tasks that have no side-effects and only return results to the coordinating node. With the coordinating node gone, it doesn't make sense to run such tasks any longer since their results will be most likely discarded.
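As a sketch, cancelling a single task by id (the task id is hypothetical; this only succeeds for tasks that extend CancellableTask):
```
POST /_tasks/oTUltX4IQMOUUVeiohTt8A:12345/_cancel
```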
The cat API previously used the Content-Type header field for
determining the media type of the response. This is in opposition to the
HTTP spec which specifies the Accept header field for this purpose. This
commit replaces the use of the Content-Type header field with the Accept
header field in the cat API.
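For example, a sketch of requesting JSON output from a cat endpoint via the Accept header:
```
curl -H "Accept: application/json" 'localhost:9200/_cat/nodes?v'
```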
Closes #14421
This processor is useful when all elements of a json array need to be processed in the same way.
It avoids having to define a separate processor for each element in an array.
Moreover, the number of elements inside a json array is often unknown in advance.
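As a sketch (assuming this is the `foreach` processor and that the wrapped processor sees the current element as `_ingest._value`; the pipeline, field, and processor choice are hypothetical):
```
PUT /_ingest/pipeline/uppercase-tags
{
  "description" : "uppercase every element of the tags array",
  "processors" : [
    {
      "foreach" : {
        "field" : "tags",
        "processor" : {
          "uppercase" : { "field" : "_ingest._value" }
        }
      }
    }
  ]
}
```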
Retrieving distributed DF for TermVectors is, besides its esoteric justification,
a very slow process and can cause serious load on the cluster. We also don't have nearly
enough testing for this stuff, and given the complexity we should remove it rather than carry it
around.
When there is an exception thrown during pipeline creation within
REST calls (in put pipeline, and simulate), we now return a structured
error response to the user with details around which processor's
configuration is the cause of the issue, or which configuration property
is misconfigured, etc.
We used to have a disabled test around cluster put settings as it left cluster settings behind without a way to remove them. That has since been fixed in the cluster put settings api, so the test can be re-enabled.
The search_after parameter provides a way to efficiently paginate from one page to the next. This parameter accepts an array of sort values, which are then used by the searcher to sort the top hits starting from the first document that is greater than the sort values.
This parameter must be used in conjunction with the sort parameter; it must contain exactly the same number of values as the number of fields to sort on.
NOTE: A field with one unique value per document should be used as the last element of the sort specification. Otherwise the sort order for documents that have the same sort values would be undefined. The recommended way is to use the field `_uid`, which is certain to contain one unique value for each document.
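For illustration, a sketch of a paginated request (index, field names, and sort values are hypothetical):
```
GET /my-index/_search
{
  "size" : 10,
  "sort" : [
    { "timestamp" : "desc" },
    { "_uid" : "asc" }
  ],
  "search_after" : [1463538857, "my-type#1"]
}
```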
Fixes #8192
Merge feature/ingest branch into master branch.
This adds the ingest feature to ES that allows documents to be preprocessed before indexing on an ingest node.
By default a node is an ingest node. Documents are preprocessed via a pipeline. A pipeline consists
of one or more processors. Each processor makes one or more modifications to a document it processes.
There are many types of processors available out-of-the-box that are designed to make a specific change to a document being processed. In a cluster, many pipelines can be configured via dedicated pipeline APIs. A new option on the bulk
and index APIs allows control over which pipeline is picked for preprocessing. If no pipeline is specified then the ingest
feature is skipped and no preprocessing takes place.
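A sketch of the flow (pipeline, index, and field names are hypothetical; `set` is one of the out-of-the-box processors):
```
PUT /_ingest/pipeline/my-pipeline
{
  "description" : "adds a processed flag",
  "processors" : [
    { "set" : { "field" : "processed", "value" : true } }
  ]
}

PUT /my-index/my-type/1?pipeline=my-pipeline
{ "message" : "hello" }
```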
Site plugins used to be used for things like kibana and marvel, but
there is no longer a need since kibana (and marvel as a kibana plugin)
uses node.js. This change removes site plugins, as well as the flag for
jvm plugins. Now all plugins are jvm plugins.
This change affects get alias, get aliases, as well as cat aliases. They all return closed indices too by default. get alias and get aliases also allow returning open indices only through the `expand_wildcards` option (set it to `open`).
Closes #14982
Warmers are now barely useful and will be removed in 3.0. Note that this only
removes the warmer API and query-based warmers. We still have warmers internally
for e.g. global ordinals.
Close #15607
* Added percolator field mapper that extracts the query terms and indexes these terms with the percolator query.
* At percolate time these extracted terms are used to query for the percolator queries that are likely to be evaluated. This can significantly cut down the time it takes to percolate, whereas before all percolator queries were evaluated if they matched the document being percolated.
* Changes made to percolator queries are no longer immediately visible; a refresh needs to happen before the changes are visible.
* By default the percolate api only returns up to 10 matches instead of returning all matching percolator queries.
* Made percolate more modular, so that it is easier to add unit tests.
* Added unit tests for the percolator.
Closes #12664 Closes #13646
Adds a task manager class and enables all activities to register with the task manager. Currently, the immutable Transport*Activity class represents the activity itself, shared across all requests. This PR adds an additional structure, Task, that keeps track of currently running requests and can be used to communicate with these requests using TransportTaskAction.
Related to #15117
This adds the required changes/checks so that the build can run on
FreeBSD.
There are a few things that differ between FreeBSD and Linux:
- CPU probes return -1 for CPU usage
- `hot_threads` cannot be supported on FreeBSD
From OpenJDK's `os_bsd.cpp`:
```c++
bool os::is_thread_cpu_time_supported() {
#ifdef __APPLE__
  return true;
#else
  return false;
#endif
}
```
So this API now returns (for each FreeBSD node):
```
curl -s localhost:9200/_nodes/hot_threads
::: {Devil Hunter Gabriel}{q8OJnKCcQS6EB9fygU4R4g}{127.0.0.1}{127.0.0.1:9300}
hot_threads is not supported on FreeBSD
```
- multicast fails in native `join` method - known bug:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193246
Which causes:
```
1> Caused by: java.net.SocketException: Invalid argument
1> at java.net.PlainDatagramSocketImpl.join(Native Method)
1> at java.net.AbstractPlainDatagramSocketImpl.join(AbstractPlainDatagramSocketImpl.java:179)
1> at java.net.MulticastSocket.joinGroup(MulticastSocket.java:323)
1> at org.elasticsearch.plugin.discovery.multicast.MulticastChannel$Plain.buildMulticastSocket(MulticastChannel.java:309)
```
So these tests are skipped on FreeBSD.
Resolves #15562
Do not load fields from _source when using the `fields` option.
Non-stored (non-existent) fields are ignored by the fields visitor when using the `fields` option.
Fixes #10783
Support * wildcard to retrieve stored fields when using the `fields` option.
Supported pattern styles are "xxx*", "*xxx", "*xxx*" and "xxx*yyy".
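For illustration, a sketch of requesting stored fields with wildcards in a search request (index and field names are hypothetical):
```
GET /my-index/_search
{
  "fields" : [ "user.*", "*_date" ],
  "query" : { "match_all" : {} }
}
```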
This adds support for arbitrary headers sent with each REST request, it
will allow us to test things like different xcontent-encoding (see
50_with_headers.yaml for what this looks like).
Headers are specified at the same level as `catch`, so a request would
look like:
```yaml
- do:
    headers:
      Content-Type: application/yaml
    get:
      index: test_1
      type: _all
      id: 1
```
This commit fixes a test bug in the cat shards REST test. In
particular, there was a race condition in the test that would cause the
test to sometimes fail. The race condition is that some of the shards
would go to state STARTED after the sync flush was issued. These shards
would (correctly) show up in the output as having state started but
without a sync_id. However, the expected output was written to only
look for shards that have state STARTED and a sync_id, or shards that
are still INITIALIZING or are UNASSIGNED and (of course) do not have a
sync_id. The best approach here is to just simplify the test.
The completion suggester provides auto-complete/search-as-you-type functionality.
This is a navigational feature to guide users to relevant results as they are typing, improving search precision.
It is not meant for spell correction or did-you-mean functionality like the term or phrase suggesters.
The completions are indexed as a weighted FST (finite state transducer) to provide fast Top N prefix-based
searches suitable for serving relevant results as a user types.
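For illustration, a sketch of mapping and querying a completion field (index, type, field, and suggestion names are hypothetical; the `_suggest` endpoint and `prefix` syntax are assumed):
```
PUT /music
{
  "mappings" : {
    "song" : {
      "properties" : {
        "suggest" : { "type" : "completion" }
      }
    }
  }
}

POST /music/_suggest
{
  "song-suggest" : {
    "prefix" : "nir",
    "completion" : { "field" : "suggest" }
  }
}
```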
closes #10746
Similarly to what we did with the search api, we can now also move query parsing on the coordinating node for the validate query api. Given that the explain api is a single shard operation (compared to search which is instead a broadcast operation), this doesn't change a lot in how the api works internally. The main benefit is that we can simplify the java api by requiring a structured query object to be provided rather than a bytes array that will get parsed on the data node. Previously if you specified a QueryBuilder it would be serialized in json format and would get reparsed on the data node, while now it doesn't go through parsing anymore (as expected), given that after the query-refactoring we are able to properly stream queries natively. Note that the WrapperQueryBuilder can be used from the java api to provide a query as a string, in that case the actual parsing of the inner query will happen on the data node.
Relates to #10217 Closes #14384
This change removes the leftover pom files. A couple files were left for
reference, namely in qa tests that have not yet been migrated (vagrant
and multinode). The deb and rpm assemblies also still exist for
reference when finishing their setup in gradle.
See #13930
We have two types of parse methods for queries: one for the inner query, to be used once the parser is positioned within the query element, and one for the whole query source, including the query element that wraps the actual query.
With the search refactoring we ended up using the former in count, cat count and delete by query, whereas we should have used the latter. It ends up working properly given that we have a registered (deprecated) query called "query", which used to allow wrapping a filter into a query, but this has the following downsides:
1) prevents us from removing the deprecated "query" query
2) we end up supporting a top level query that is not wrapped within a query element (pre 1.0 syntax iirc that shouldn't be supported anymore)
This commit finally removes the "query" query and fixes the related parsing bugs. We also had some tests that were providing queries in the wrong format; those have been fixed too.
Closes #13326 Closes #14304
This adds an API for force merging lucene segments. The `/_optimize` API is now
deprecated and replaced by the `/_forcemerge` API, which has all the same flags
and action, just a different name.
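For example (a sketch; the index name is hypothetical):
```
curl -XPOST 'http://localhost:9200/my-index/_forcemerge?max_num_segments=1'
```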
Currently it's not possible to specify a timeout for nodes operations (such as node info, node stats, cluster stats and hot threads) via REST-based APIs.
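Assuming the fix wires up a `timeout` query parameter, usage would look like this sketch:
```
curl 'localhost:9200/_nodes/stats?timeout=10s'
```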
The `_create` API is handy way to specify an index operation should only be done if the document doesn't exist. This is currently implemented in explicit code paths all the way down to the engine. However, conceptually this is no different than any other versioned operation - instead of requiring a document is on a specific version, we require it to be deleted (or non-existent). This PR removes Engine.Create in favor of a slight extension in the VersionType logic.
There are however a couple of side effects:
- DocumentAlreadyExistsException is removed and VersionConflictException is used instead (with an improved error message)
- Update will reject version parameters if the upsert option is used (it doesn't compute anyway).
- Translog.Create is also removed in favor of Translog.Index (that's OK because their binary format was the same, so we can just read Translog.Index off the translog file)
Closes #13955
This commit removes all the opaque bytes for extra_source and template_source.
Instead, source and extra_source etc. are represented as a SearchSourceBuilder, which can
be modified in place and is updated with the content of the request parameters.
Template source is parsed and evaluated, which in turn replaces the actual source.
We moved a lot of repositories into elasticsearch, but in their new
location they retained their LICENSE.txt and NOTICE.txt files. These are
all the same, and having the license and notice at the root of the
repository should be sufficient.
Adds a node attribute to all test runs and uses the attribute to test
`_cat/nodeattrs`.
Note that it's quite possible to create an impressively slow regex while doing
this, and you have to be careful. See comment in commit for more if curious.
Closes #12558
detect_noop is pretty cheap and noop updates are comparatively expensive, so this
feels like a sensible default.
Also had to do some testing and documentation around how _ttl works with
detect_noop.
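For illustration, a sketch of opting out of the new default on a single update (index, type, and field names are hypothetical):
```
POST /my-index/my-type/1/_update
{
  "doc" : { "status" : "active" },
  "detect_noop" : false
}
```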
Closes #11282
In order to have consistent deploys across several repositories,
we should deploy to sonatype first, then mirror those contents,
and then upload to s3.
This means, the aws wagon is not needed anymore.
Adds an explicit description to the RPM package so it doesn't inherit the description from the POM.
Closes #12550
Also, modified descriptions for deb and rpm packages to be the same and to reference the documentation rather than listing features that are out of date.
Conflicting mappings that were allowed before v2.0 can cause runaway shard failures on upgrade. This commit adds a check that prevents a cluster from starting if it contains such indices, as well as prevents restoring such indices from a snapshot into an already running cluster.
Closes #11857
A change was recently added which uses the default timezone for the creation
date on CAT endpoints. We should be consistent and use UTC across the board.
This commit adds #getDefaultTimezone() to the forbidden APIs and fixes the REST tests.
Relates to #11688
Store information reports on which nodes shard copies exist, the shard
copy version, indicating how recent they are, and any exceptions
encountered while opening the shard index or from earlier engine failure.
closes #10952
Today we have an intermediate hierarchy for shard and index exceptions,
which makes it hard to introduce generic exceptions like the ResourceNotFoundException
introduced in this commit. This commit breaks up the hierarchy by adding index and shard
as a special internal header that gets rendered for every exception that fills that header.
This commit removes dedicated exceptions like `IndexMissingException` or
`IndexShardMissingException` in favour of `ResourceNotFoundException`.
This test asserts that the first recovery result is of type SNAPSHOT.
This assertion might not be true depending on the rendering order if
a replica is recovered quickly enough. This commit disables replicas and
their recovery since that's not the purpose of this test.
Today everything is tied to having the next version as the latest.
In order to work towards 2.0.0.beta1 we need to fix all the usage of
2.0.0-SNAPSHOT to reflect the version we will release soon.
Usually we do this on the release branch, but to simplify things I want to
keep this on master for now and move to 2.1.0-SNAPSHOT on master once
we have created a 2.0 branch.
Closes #12148
This information was stored with the snapshot but wasn't available on the interface. Knowing the version of elasticsearch that created the snapshot can be useful to determine the minimal version of the cluster that is required in order to restore this snapshot.
Closes #11980
This commit adds support to retrieve fields when using the bulk update API. This functionality was previously available for the update API
but not for the bulk update API.
Closes #11527
Field stats index constraints allow omitting all field stats for indices that don't match the constraint. An index
constraint can exclude an index's field stats based on the `min_value` and `max_value` statistics. This option is only
useful if the `level` option is set to `indices`.
For example, index constraints can be useful to find out the min and max value of a particular property of your data in
a time-based scenario. The following request only returns field stats for the `answer_count` property for indices
holding questions created in the year 2014:
curl -XPOST 'http://localhost:9200/_field_stats?level=indices' -d '{
  "fields" : ["answer_count"], <1>
  "index_constraints" : { <2>
    "creation_date" : { <3>
      "min_value" : { <4>
        "gte" : "2014-01-01T00:00:00.000Z"
      },
      "max_value" : {
        "lt" : "2015-01-01T00:00:00.000Z"
      }
    }
  }
}'
Closes #11187
In order to be more consistent with what they do, the query cache has been
renamed to request cache and the filter cache has been renamed to query
cache.
A known issue is that package/logger names no longer match settings names;
please speak up if you think this is an issue.
Here are the settings for which I kept backward compatibility. Note that they
are a bit different from what was discussed on #11569 but putting `cache` before
the name of what is cached has the benefit of making these settings consistent
with the fielddata cache whose size is configured by
`indices.fielddata.cache.size`:
* index.cache.query.enable -> index.requests.cache.enable
* indices.cache.query.size -> indices.requests.cache.size
* indices.cache.filter.size -> indices.queries.cache.size
Close #11569
This commit consolidates several abstractions on the shard level in
ordinary classes not managed by the shard level guice injector.
Several classes have been collapsed into IndexShard and IndexShardGatewayService
was cleaned up to be more lightweight and self-contained. It has also been moved into
the index.shard package and its operation is renamed from recovery from "gateway" to recovery
from "store" or "shard_store".
Closes #11847
This is a follow up to #8143 and #6730 for _timestamp. It removes
support for `path`, as well as any field type settings, and
enables docvalues for _timestamp, for 2.0. Users who need to
adjust these settings can use a date field.
In order to get a quick overview by simply checking the cluster state
and its corresponding cat API, the following two attributes have been added
to the cluster health response:
* task max waiting time: the time value of the first task of the
queue and how long it has been waiting
* active shards percent: the percentage of shards that are active
This makes the cluster health API handy for checking whether a fully restarted
cluster is back up and running.
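An abridged sketch of the response (assuming the fields are named `task_max_waiting_in_queue_millis` and `active_shards_percent_as_number`; values are hypothetical):
```
GET /_cluster/health

{
  ...
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
```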
Closes #10805
The change makes rest-spec-api a project in the same way as we build dev-tools. It packages the tests and api in a bundle using the maven-remote-resources-plugin and uses the same plugin in the plugins and core pom to unpack the rest-api-spec into the target directory and references the rest tests there in the test resources.
The main stimulus for this change is that for those using Eclipse the current build does not work. After running `mvn eclipse:eclipse` the Eclipse IDE errors because the rest-api-spec is outside of the project scope, meaning that every time the command is run (required whenever any dependencies change), the class path of all the projects has to be manually fixed.
By setting the `human` parameter to `true`, it's now possible to see human readable versions of Elasticsearch that created and updated the index, as well as the date when the index was created.
Closes #11484
This changes the parameter name `ignore_like` to the more user friendly name
`unlike`. The latter feature generates a query from the terms in `A` but not
from the terms in `B`. This translates to a result set which is like `A` but
unlike `B`. We could have further negatively boosted any documents that have
some `B`, but these documents already do not receive any contribution from
having `B`, and would therefore negatively compete with documents having `A`.
Closes #11117
Unassigned meta includes additional information as to why a shard is unassigned; this is especially handy when a shard moves to unassigned due to a node leaving or a shard failure.
The additional data is provided as part of the cluster state, and as part of `_cat/shards` API.
The additional meta includes the timestamp that the shard has moved to unassigned, allowing us in the future to build functionality such as delay allocation due to node leaving until a copy of the shard is found.
closes #11653
We have to make sure all shards are started to know the synced flush will hit them all. Shards that are still initializing during the sync flush may be missed and confuse the stats call.
Some of our meta fields (such as _id, _version, ...) are returned as top-level
properties of the json document, while other properties (_timestamp, _routing,
...) are returned under `fields`. This commit makes all meta fields returned
as top-level properties.
So e.g. `GET test/test/1?fields=_timestamp,foo` would now return
```json
{
  "_index": "test",
  "_type": "test",
  "_id": "1",
  "_version": 1,
  "_timestamp": 10000000,
  "found": true,
  "fields": {
    "foo": [ "bar" ]
  }
}
```
while it used to return
```json
{
  "_index": "test",
  "_type": "test",
  "_id": "1",
  "_version": 1,
  "found": true,
  "fields": {
    "_timestamp": 10000000,
    "foo": [ "bar" ]
  }
}
```
The wildcard cat API REST tests relied on bulk.max and bulk.min in
the thread_pool response. However due to the thread pool types being
randomized in InternalTestCluster, the min/max values were not guaranteed
to exist (the cached thread pool type is unbounded and thus does not have a
max value).
In order to prevent this, the test has been removed and now the cat
nodes test is used for wildcard testing, which always returns stats
about the heap.
The tests for the recently added wildcard feature were
relying on the order of the hashmap being used, which could be
different.
The implementation now ensures that the header fields are
parsed in the order they have been added.
This change adds a new "filter_path" parameter that can be used to filter and reduce the responses returned by the REST API of elasticsearch.
For example, returning only the shards that failed to be optimized:
```
curl -XPOST 'localhost:9200/beer/_optimize?filter_path=_shards.failed'
{"_shards":{"failed":0}}%
```
It supports multiple filters (separated by a comma):
```
curl -XGET 'localhost:9200/_mapping?pretty&filter_path=*.mappings.*.properties.name,*.mappings.*.properties.title'
```
It also supports the YAML response format. Here it returns only the `_id` field of a newly indexed document:
```
curl -XPOST 'localhost:9200/library/book?filter_path=_id' -d '---hello:\n world: 1\n'
---
_id: "AU0j64-b-stVfkvus5-A"
```
It also supports wildcards. Here it returns only the host name of every node in the cluster:
```
curl -XGET 'http://localhost:9200/_nodes/stats?filter_path=nodes.*.host*'
{"nodes":{"lvJHed8uQQu4brS-SXKsNA":{"host":"portable"}}}
```
And "**" can be used to include sub fields without knowing the exact path. Here it returns only the Lucene version of every segment:
```
curl 'http://localhost:9200/_segments?pretty&filter_path=indices.**.version'
{
  "indices" : {
    "beer" : {
      "shards" : {
        "0" : [ {
          "segments" : {
            "_0" : {
              "version" : "5.2.0"
            },
            "_1" : {
              "version" : "5.2.0"
            }
          }
        } ]
      }
    }
  }
}
```
Note that elasticsearch sometimes returns the raw value of a field directly, like the _source field. If you want to filter _source fields, you should consider combining the already existing _source parameter (see the Get API for more details) with the filter_path parameter, like this:
```
curl -XGET 'localhost:9200/_search?pretty&filter_path=hits.hits._source&_source=title'
{
  "hits" : {
    "hits" : [ {
      "_source":{"title":"Book #2"}
    }, {
      "_source":{"title":"Book #1"}
    }, {
      "_source":{"title":"Book #3"}
    } ]
  }
}
```
#10032 introduced the notion of sealing an index by marking it with a special read only marker, allowing for a couple of optimizations to happen. The most important one was to speed up recoveries of shards where we know nothing has changed since they were online, by skipping the file based sync phase. During the implementation we came up with a lighter weight notion which achieves the same recovery benefits but without the read only aspects, which we dubbed synced flush. The fact that it was light weight and didn't put the index in read only mode allowed us to do it automatically in the background, which has a great advantage. However we also felt the need to allow users to manually trigger this operation.
The implementation at #11179 added the synced flush internal logic and the manual (REST) API. The name of the API was modeled after the sealing terminology, which may end up being confusing. This commit changes the API name to match the internal synced flush naming, namely `{index}/_flush/synced`.
On top of that it contains a couple of other changes:
- Remove all java client API. This feature is not supposed to be called programmatically by applications but rather by admins.
- Improve rest responses, making the structure similar to other (flush) APIs
- Change IndexShard#getOperationsCount to exclude the internal +1 on an open shard. It's confusing to get 1 while there are actually no ongoing operations
- Some minor other clean ups
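For example (a sketch; the index name is hypothetical):
```
curl -XPOST 'http://localhost:9200/my-index/_flush/synced'
```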
Mappings conflicts should not be ignored. If I read the history correctly, this
option was added when a mapping update to an existing field was considered a
conflict, even if the new mapping was exactly the same. Now that mapping updates
are smart enough to detect conflicting options, we don't need an option to
ignore conflicts.
There currently are small differences between search api and count, exists, validate query, explain api when it comes to reading query_string parameters. `analyze_wildcard`, `lowercase_expanded_terms` and `lenient` are only read by the search api and ignored by all other mentioned apis. Unified code to fix this and make sure it doesn't happen again. Also shared some code when it comes to printing out the query as part of SearchSourceBuilder conversion to ToXContent.
Extended REST spec to include all the supported params (some that were already supported weren't listed), and added REST tests (also some basic tests for count and search_exists which weren't tested at all).
Closes #11057
Removes the More Like This API, users should now use the More Like This query.
The MLT API tests were converted to their query equivalent. Also some clean
ups in MLT tests.
Closes #10736 Closes #11003
This removes Elasticsearch's filter cache and uses Lucene's instead. It has some
implications:
- custom cache keys (`_cache_key`) are unsupported
- decisions are made internally and can't be overridden by users (`_cache`)
- not only filters can be cached but also all queries that do not need scores
- parent/child queries can now be cached, however cached entries are only
valid for the current top-level reader so in practice it will likely only
be used on read-only indices
- the cache deduplicates filters, which plays nicer with large keys (eg. `terms`)
- better stats: we already had ram usage and evictions, but now also hit count,
miss count, lookup count, number of cached doc id sets and current number of
doc id sets in the cache
- dynamically changing the filter cache size is not supported anymore
Internally, an important change is that it removes the NoCacheFilter infrastructure
in favour of making Query.rewrite specializing the query for the current reader so
that it will only be cached on this reader (look for IndexCacheableQuery).
Note that consuming filters with the query API (createWeight/scorer) instead of
the filter API (getDocIdSet) is important for parent/child queries because
otherwise a QueryWrapperFilter(ParentQuery) would run the wrapped query per
segment while relations might be cross segments.
Remove the ability to specify search type ‘query_and_fetch’ and
‘df_query_and_fetch’ from the REST API.
- Adds REST tests
- Updates REST API spec to remove ‘query_and_fetch’ and
‘df_query_and_fetch’ as options
- Removes documentation for these options
Closes #9606
The current implementation is dangerous: it unexpectedly refreshes,
which can quickly cause an unhealthy index (segment explosion). It
can also delete different documents on primary vs replicas, causing
inconsistent replicas.
For 2.0 we will replace this with an optional plugin that does a
scan/scroll search and then issues bulk delete requests.
Closes #10859
This commit adds support for structural errors / failures / exceptions
on the elasticsearch REST layer. Exceptions are rendered with at least
a `type` and a `reason` corresponding to the exception name and the message.
Some exceptions like the ones associated with an index or a shard will have
additional information about the index the exception was triggered on or the
shard respectively.
Each rendered response will also contain a list of root causes, which is a list
of distinct shard level errors returned for the request. Root causes are the lowest
level elasticsearch exception found per shard response and are intended to be displayed
to the user to indicate the source of the exception.
Shard level responses are by default grouped by their type and reason to reduce the amount
of duplicates returned. Yet, the same exception returned from different indices will not be
grouped.
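For illustration, a sketch of what a rendered error might look like (the exception type and values are hypothetical):
```
{
  "error" : {
    "root_cause" : [ {
      "type" : "index_not_found_exception",
      "reason" : "no such index"
    } ],
    "type" : "index_not_found_exception",
    "reason" : "no such index"
  },
  "status" : 404
}
```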
Closes #3303
This commit splits the current ClusterBlockLevel.METADATA into two distinct ClusterBlockLevel.METADATA_READ and ClusterBlockLevel.METADATA_WRITE blocks. It allows making a distinction between
an operation that modifies the index or cluster metadata and an operation that does not change any metadata.
Before this commit, many operations were blocked when the cluster was read-only: Cluster Stats, Get Mappings, Get Snapshot, Get Index Settings, etc. Now those operations are allowed even when
the cluster or the index is read-only.
Related to #8102, #2833 Closes #3703 Closes #5855 Closes #10521 Closes #10522
The field stats api returns field level statistics such as lowest, highest values and number of documents that have at least one value for a field.
An api like this can be useful to explore a data set you don't know much about. For example you can figure out what the lowest and highest response times are, so that you can create a histogram or range aggregation with sane settings.
This api doesn't run a search to figure these statistics out, but rather uses the Lucene index to look these statistics up (using the Terms class in Lucene). So finding out these stats for fields is cheap and quick.
The min/max values are based on the type of the field. So for a numeric field min/max are numbers, for a date field the min/max are dates, and for other fields the min/max are term based.
Closes #10523
Also changed the stash logger to not log all stashed values under debug (it does trace now) but do dump the stash content upon failure (under info as a XContent)
Extends ShardStats with commit specific information. We currently expose commit id, generation and the user data map.
The information is also retrievable via the Rest API by using `GET _stats?level=shards`
Closes #10687
In Lucene 5.1 lots of filters got deprecated in favour of equivalent queries.
Additionally, random-access to filters is now replaced with approximations on
scorers. This commit
- replaces the deprecated NumericRangeFilter, PrefixFilter, TermFilter and
TermsFilter with NumericRangeQuery, PrefixQuery, TermQuery and TermsQuery,
wrapped in a QueryWrapperFilter
- replaces XBooleanFilter, AndFilter and OrFilter with a BooleanQuery in a
QueryWrapperFilter
- removes DocIdSets.isBroken: the new two-phase iteration API will now help
execute slow filters efficiently
- replaces FilterCachingPolicy with QueryCachingPolicy
Close #8960
This commit changes dynamic mappings updates so that they are synchronous on the
entire cluster and their validity is checked by the master node. There are some
important consequences of this commit:
- a failing index request on a non-existing type does not implicitly create
the type anymore
- dynamic mappings updates cannot create inconsistent mappings on different
shards
- indexing requests that introduce new fields might induce latency spikes
because of the overhead to update the mappings on the master node
Close #8688
This option defaults to false, because it is also important to upgrade
the "merely old" segments since many Lucene improvements happen within
minor releases.
But you can pass true to do the minimal work necessary to upgrade to
the next major Elasticsearch release.
The HTTP GET upgrade request now also breaks out how many bytes of
ancient segments need upgrading.
Closes #10213 Closes #10540
For backwards compatibility reasons routing_nodes were previously printed out when routing_table was requested, together with the actual routing_table. Now they are printed out only when requested through the `routing_nodes` flag.
Relates to #10412 Closes #10486
Cluster state api returns both routing_table and routing_nodes sections whenever routing_table is requested. That is pretty much the same info, just grouped differently. This commit allows to differentiate between the two. Yet, routing_table still returns both for bw comp reasons.
Closes #10352 Closes #10412
Align the get indexed scripts and get search template apis with our get api, which returns a response body when the document is not found, with a found boolean flag. Also, return metadata info all the time too.
Closes #7325 Closes #10396
RoutingTable's activePrimaryShardsGrouped(), allActiveShardsGrouped() and
allAssignedShardsGrouped() methods treated empty index array input
parameters as meaning "all" indices and expanded to the routing map's
keyset. However, the expansion of index names is now already done in
MetaData#concreteIndices(). Returning an empty index name list here
when a wildcard pattern didn't match any index name could lead to
problems like #9081, because the RoutingTable still expanded this
list of names to "_all". In the case of e.g. the recovery endpoint this
could lead to problems.
Closes #9081 Closes #10148
This commit brings the benefits of the `count` search type to search requests
that have a `size` of 0:
- a single round-trip to shards (no fetch phase)
- ability to use the query cache
Since `count` now provides no benefits over `query_then_fetch`, it has been
deprecated.
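For illustration, a sketch of the now-preferred equivalent of a count request (index and query are hypothetical):
```
GET /my-index/_search
{
  "size" : 0,
  "query" : { "match" : { "title" : "elasticsearch" } }
}
```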
Close #7630
Deleting a type from an index is inherently dangerous because
the type can be recreated with new mappings which may conflict
with existing segments still using the old mappings. This
removes the ability to delete a type (similar to how deleting
fields within a type is not allowed, for the same reason).
closes #8877 closes #10231
This commit adds the current total number of translog operations to the recovery reporting API. We also expose the recovered / total percentage:
```
"translog": {
"recovered": 536,
"total": 986,
"percent": "54.3%",
"total_time": "2ms",
"total_time_in_millis": 2
},
```
Closes #9368 Closes #10042
The analysis chain should be used instead of relying on this, as it is
confusing when dealing with different per-field analysers.
The `locale` option was only used for `lowercase_expanded_terms`, which,
once removed, is no longer needed, so it was removed as well.
Fixes #9978
Relates to #9973
We've been relying on URI for url encoding, but it turns out it has some problems. For instance '+' stays as is while it should be encoded to `%2B`. If we go and manually encode query params we have to be careful though not to run into double encoding ('+'=>'%2B'=>'%252B'). The applied solution relies on URI encoding for the url path, but manual url encoding for the query parameters. We prevent URI from double encoding query params by using its single argument constructor that leaves everything as is.
We can also revert back the expression script REST test that revealed this to its original content (which contains an addition).
Closes #9769 Closes #9946
While the parser allowed changing field type settings, these would never
have been serialized. So this change simply removes parsing using
parseField. Backcompat will still work if a user uploads old settings
(they just would never have worked anyways, so we continue ignoring
them with 1.x, and 2.x will now error).
See #8143. Closes #9914
To support the `_recovery` API, the recovery process keeps track of current progress in a class called RecoveryState. This class currently has some issues, mostly around concurrency (see #6644). This PR cleans it up as well as other issues around it:
- Make the Index subsection API cleaner:
- remove redundant information - all calculation is done based on the underlying file map
- clearer definition of what is what: total files, vs reused files (local files that match the source) vs recovered files (copied over). % based progress is reported based on recovered files only.
- cleaned up json response to match other API (sadly this breaks the structure). We now properly report human values for dates and other units.
- Add more robust unit testing
- Detail flag was passed along as state (it's now a ToXContent param)
- State lookup during reporting is now always done via the IndexShard , no more fall backs to many other classes.
- Cleanup APIs around time and move the little computations to the state class as opposed to doing them out of the API
I also improved error messages out of the REST testing infra for things I run into.
Closes #6644 Closes #9811
The number of current pending tasks is useful to detect an overloaded master. This commit adds it to the cluster health API. The complete list can be retrieved from the dedicated pending tasks API.
It also adds rest tests for the cluster health variants.
Closes #9877
The plus sign is not treated correctly in encoding and can lead
to problems, if the search request is encoded as HTTP parameter
instead of the HTTP body.
Relates #9769
Using '_cat/segments' or the indices segments api without matching any index
now returns an empty result instead of throwing IndexMissingException.
Closes #9219
Whenever we have an api that supports GET with a body, we always support the POST method too, as well as providing the body as a query_string parameter called `source`. Our REST spec should reflect this convention. Fixed them and introduced a hard check at parse time in our Java REST tests runner, which will cause the tests to fail if specs are not compliant.
Closes #9629
The percolate api doesn't parse the encoded body provided as `source` query string parameter, when percolating an existing document. Fixed and added REST test that would have caught this since we randomly use GET + encoded `source` param instead of GET + request body in our java runner (the perl runner does the same too).
Closes #9628
The `full` option and `FlushType.NEW_WRITER` only exists to allow
realtime changes to two settings (`index.codec` and `index.concurrency`).
Those settings are very expert and don't really need to be updateable
in realtime.
Until now, there was no possibility to expose infos about configured
transport profiles. This commit adds the ability to expose those
information in the TransportInfo class.
The channel as well as the netty pipeline handler now also contain
the profile they were configured for, as this information cannot be
extracted elsewhere.
In addition, each profile now can set its own publish host and port,
which might be needed in case of portforwarding or using docker.
Closes #9134
Adding missing support for the multi-index query parameters 'ignore_unavailable',
'allow_no_indices' and 'expand_wildcards' to '_cluster/state' API. These
parameters are supposed to be supported for APIs that work across multiple indices.
So far overwriting the default settings per REST call was not possible which is
fixed here.
Closes #5229 Closes #9295
Today we give the HTTP status back within the HTTP response itself and within the JSON response as well:
```sh
curl localhost:9200/
```
```js
{
  "status" : 200,
  "name" : "Red Wolf",
  "version" : {
    "number" : "2.0.0",
    "build_hash" : "6837a61d8a646a2ac7dc8da1ab3c4ab85d60882d",
    "build_timestamp" : "2014-08-19T13:55:56Z",
    "build_snapshot" : true,
    "lucene_version" : "4.9"
  },
  "tagline" : "You Know, for Search"
}
```
The header indicates how many shard copies (primary and replica shards) a write was supposed to go to, on how many
shard copies the write succeeded, and potentially captures shard failures if writing into a replica shard fails.
For async writes it also includes the number of shards for which the write is still pending.
Closes #7994
This fix ensures that calls to the GET alias/mappings/settings/warmers APIs return the aliases/mappings/settings/warmers object even if there is no content within them. This makes them consistent with the GET Index API docs and the breaking changes in 1.4 docs.
Closes #9148
Add a new ignore_idle_threads boolean option (default true) to
/_nodes/hot_threads, to filter out threads in known idle places like
waiting on a socket select or on pulling the next task from an empty
queue.
Closes #8985 Closes #8908
This commit adds support for version and version_type to the Term Vectors API.
This could be useful in the following case whereby the user gets a document
and later wants to generate its TVs. With version, this would ensure that only
the TVs of that particular document are generated, and error out if the
document has been updated in between.
Closes #7480
Adds a `ignore_like` parameter to the MLT Query, which simply tells the
algorithm to skip all the terms from the given documents. This could be useful
in order to better guide nearest neighbor search by telling the algorithm to
never explore the space spanned by the given `ignore_like` docs. In essence we
are interested about the characteristic of a given item, but not of the ones
provided by `ignore_like`, thereby forcing the algorithm to go deeper in its
selection of terms. Note that this is different than simply performing a must
not boolean query on the unliked items. The syntax is exactly the same as the
`like` parameter.
Closes #8674
Today, Elasticsearch has a separate merge thread pool checking once
per second (by default) if any merges are necessary, but this is no
longer necessary since we can and do now tell Lucene's
ConcurrentMergeScheduler never to "hard pause" threads when merges
fall behind, since we do our own index throttling.
This change goes back to letting Lucene launch merges as needed, and
removes these two expert settings:
index.merge.force_async_merge
index.merge.async_interval
Now merges kick off immediately instead of waiting up to 1 second
before running.
Closes #8643
We speak of the term vectors of a document, where each field has an associated
stored term vector. Since by default we are requesting all the term vectors of
a document, the HTTP request endpoint should rather be called `_termvectors`
instead of `_termvector`. The usage of `_termvector` is now deprecated, as
well as the transport client call to termVector and prepareTermVector.
Closes #8484
Fixed behaviour where two representations of the default index analyzer weren't being treated as equivalent. Added REST test to confirm fix.
Closes #2716
If a shard (e.g. replica) gets initialized after we indexed the document, it gets refreshed internally and we find the doc and its term_vectors, thus the test fails.
We currently use the djb2 hash function in order to compute the shard a
document should go to. Unfortunately this hash function is not very
sophisticated and you can sometimes hit adversarial cases, such as numeric ids
on 33 shards.
Murmur3 generates hashes with a better distribution, which should avoid the
adversarial cases.
Here are some examples of how 100000 incremental ids are distributed to shards
using either djb2 or murmur3.
5 shards:
Murmur3: [19933, 19964, 19940, 20030, 20133]
DJB: [20000, 20000, 20000, 20000, 20000]
3 shards:
Murmur3: [33185, 33347, 33468]
DJB: [30100, 30000, 39900]
33 shards:
Murmur3: [2999, 3096, 2930, 2986, 3070, 3093, 3023, 3052, 3112, 2940, 3036, 2985, 3031, 3048, 3127, 2961, 2901, 3105, 3041, 3130, 3013, 3035, 3031, 3019, 3008, 3022, 3111, 3086, 3016, 2996, 3075, 2945, 2977]
DJB: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 900, 900, 900, 900, 1000, 1000, 10000, 10000, 10000, 10000, 9100, 9100, 9100, 9100, 9000, 9000, 0, 0, 0, 0, 0, 0]
Even if djb2 looks ideal in some cases (5 shards), the fact that the
distribution of its hashes has some patterns can raise issues with some shard
counts (eg. 3, or even worse 33).
Some tests have been modified because they relied on implementation details of
the routing hash function.
Close #7954
Fixes a bug where alias creation would allow `null` for index name, which thereby
applied the alias to _all_ indices. This patch makes the validator throw an
exception if the index is null.
```bash
POST /_aliases
{
  "actions": [
    {
      "add": {
        "alias": "empty-alias",
        "index": null
      }
    }
  ]
}
```
```json
{
  "error": "ActionRequestValidationException[Validation Failed: 1: Alias action [add]: [index] may not be null;]",
  "status": 400
}
```
The reason this bug wasn't caught by the existing tests is because
the old test for nullness only validated against a cluster which had
zero indices. The null index is translated into "_all", and since
there are no indices, this fails because the index doesn't exist.
So the test passes.
However, as soon as you add an index, "_all" resolves and you get the
situation described in the original bug report: null index is
accepted by the alias, resolves to "_all" and gets applied to everything.
The REST tests, on the other hand, explicitly tested this bug as a real feature and therefore
passed. The REST tests were modified to change this behavior.
Fixes #7863
Add source_node and target_node fields to the recovery cat API. Also fixed and updated the documentation, which was not complete concerning field names.
Closes #8041
Storing `_timestamp` by default means that under the default configuration, you
would have all the information you need in order to reindex into a different
index.
Close #8139
cat/nodes currently does not report any details related to file descriptors. This adds the current number in use, the maximum number available as well as their ratio (percentage) to cat/nodes as hidden-by-default metrics. In addition, this also adds current heap usage (as a non-percentage of its max) and ram usage (as a non-percentage of its max) to allow tools to provide more granularity.
Closes #7652
* `get_upgrade` => `GET _upgrade` -- Return the status
* `upgrade` => `POST _upgrade` -- Perform the operation
Original specification part of c021f22523.
Related: #7884, #7922
This commit does the following:
* Add the new API at the rest layer, being backed by the optimize API
with upgrade flag, and segments api to find upgrade status.
* Add `upgrade` flag to optimize API, and deprecate `force` flag (will
remove in master)
* Add test for both synchronous and async upgrade
closes #7884 closes #7922
When asking for `GET /_cat/indices?v`, you can now retrieve closed indices in addition to opened ones.
```
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open .marvel-2014.05.21 1 1 8792 0 21.7mb 21.7mb
close test
yellow open .marvel-2014.05.22 1 1 3871 0 10.7mb 10.7mb
red open .marvel-2014.05.27 1 1
```
Closes #7907.
Closes #7936.
By default term vectors are now realtime, as opposed to previously near
realtime. If they are not found in the index, they will be generated on the
fly. The document is fetched from the transaction log and treated as an
artificial document. One can set `realtime` parameter to `false` in order to
disable this functionality. This consequently makes the MLT query realtime in
fetching documents, as it previously used to be before switching from using
the multi get API to the mtv API.
Closes #7846
Previously, the only way to specify a document not present in the index was to
use `like_text`. This would usually lead to complex queries made of multiple
MLT queries per document field. This commit adds the ability to the MLT query
to directly specify documents not present in the index (artificial documents).
The syntax is similar to the Percolator API or to the Multi Term Vector API.
Closes #7725
This contains several cleanups to the indexed scripts.
Remove the unused FetchSourceContext from the Get request.
Add lang,_version,_id to the REST GET API.
Removes the routing from GetIndexedScriptRequest since the script index is a single shard that is replicated across all nodes.
Fix backward compatible template file reference
Before 1.3.0 on disk scripts could be referenced by requesting
````
_search/template
{
  "template" : "ondiskscript"
}
````
This was broken in 1.3.0 by requiring
````
{
  "template" : {
    "file" : "ondiskscript"
  }
}
````
This commit restores the previous behavior.
Remove support for preference, realtime and refresh.
These parameters don't make sense anymore for indexed scripts, as we always force the preference to _local and
always refresh after a Put to the indexed scripts index.
Closes #7568 Closes #7559 Closes #7647 Closes #7567
Returns information about settings, aliases, warmers, and mappings. Basically returns the IndexMetadata. This new endpoint replaces the /{index}/_alias|_aliases|_mapping|_mappings|_settings|_warmer|_warmers and /_alias|_aliases|_mapping|_mappings|_settings|_warmer|_warmers endpoints whilst maintaining the same response formats. The only exception to this is on the /_alias|_aliases|_warmer|_warmers endpoint which will now return a section for 'aliases' or 'warmers' even if no aliases or warmers exist. This backwards compatibility change is documented in the reference docs.
Closes #4069
By default the reroute API should return the new cluster state, excluding the metadata. However, it was wrongly using an old parameter (filter_metadata) and thus failed to do so. This commit restores the intended behavior by wiring it to the correct `metric` parameter. We also add an enum representing the possible metrics, to avoid similar future mistakes.
Closes #7520 Closes #7523
The get, put and delete indexed script apis map to get, index and delete api and internally create those corresponding requests. We need to make sure that the original headers are handed over to the new request by passing the original request in the constructor when creating the new one.
Also streamlined the support for version and version_type in the REST layer, since the parameters were not consistently parsed and set on the internal java API requests.
Modified the REST delete template and delete script actions to make use of a client instead of using the `ScriptService` directly.
Closes #7569