This PR adds deprecation warnings when accessing System Indices via the REST layer. At this time, these warnings are only enabled for Snapshot builds by default, to allow projects external to Elasticsearch additional time to adjust their access patterns.

Deprecation warnings will be triggered by all REST requests which access registered System Indices, except for purpose-specific APIs which access System Indices as an implementation detail. The following APIs will continue to allow access to system indices by default:

- `GET _cluster/health`
- `GET {index}/_recovery`
- `GET _cluster/allocation/explain`
- `GET _cluster/state`
- `POST _cluster/reroute`
- `GET {index}/_stats`
- `GET {index}/_segments`
- `GET {index}/_shard_stores`
- `GET _cat/[indices,aliases,health,recovery,shards,segments]`

Deprecation warnings for accessing system indices take the form:

```
this request accesses system indices: [.some_system_index], but in a future major version, direct access to system indices will be prevented by default
```
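For illustration only (`.some_system_index` is the placeholder from the warning message, not a real registered index): on a build where the check is enabled, a request that touches a system index, such as

```
GET /.some_system_index/_search
```

would be expected to return the deprecation message above in the HTTP `Warning` response header.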
This commit is contained in:
parent 3e548592b6
commit 5c8b0662df
@ -42,6 +42,14 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards]

The default settings for the above parameters depend on the API being used.

Some indices (hereafter "system indices") are used by various system modules and/or plugins to store state or configuration. These indices are not intended to be accessed directly, and accessing them directly is deprecated. In the next major version, access to these indices will no longer be allowed, to prevent accidental operations that may cause problems with Elasticsearch features which depend on the consistency of data in these indices.

Some multi-target APIs that can target indices also support the following query string parameter:
@ -53,13 +53,13 @@ POST /my-index-000001/_delete_by_query

==== {api-description-title}

You can specify the query criteria in the request URI or the request body using the same syntax as the <<search-search,Search API>>.

When you submit a delete by query request, {es} gets a snapshot of the data stream or index when it begins processing the request and deletes matching documents using `internal` versioning. If a document changes between the time that the snapshot is taken and the delete operation is processed, it results in a version conflict and the delete operation fails.

NOTE: Documents with a version equal to 0 cannot be deleted using delete by query because `internal` versioning does not support 0 as a valid
@ -70,18 +70,18 @@ requests sequentially to find all of the matching documents to delete. A bulk

delete request is performed for each batch of matching documents. If a search or bulk request is rejected, the requests are retried up to 10 times, with exponential back off. If the maximum retry limit is reached, processing halts and all failed requests are returned in the response. Any delete requests that completed successfully still stick; they are not rolled back.

You can opt to count version conflicts instead of halting and returning by setting `conflicts` to `proceed`.
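For example, a minimal sketch of such a request, reusing the `my-index-000001` index from the surrounding examples:

[source,console]
--------------------------------------------------
POST /my-index-000001/_delete_by_query?conflicts=proceed
{
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  }
}
--------------------------------------------------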
===== Refreshing shards

Specifying the `refresh` parameter refreshes all shards involved in the delete by query once the request completes. This is different than the delete API's `refresh` parameter, which causes just the shard that received the delete request to be refreshed. Unlike the delete API, it does not support `wait_for`.

[[docs-delete-by-query-task-api]]

@ -90,7 +90,7 @@ request to be refreshed. Unlike the delete API, it does not support

If the request contains `wait_for_completion=false`, {es} performs some preflight checks, launches the request, and returns a <<tasks,`task`>> you can use to cancel or get the status of the task. {es} creates a record of this task as a document at `.tasks/task/${taskId}`. When you are done with a task, you should delete the task document so {es} can reclaim the space.
@ -101,20 +101,20 @@ before proceeding with the request. See <<index-wait-for-active-shards>>

for details. `timeout` controls how long each write request waits for unavailable shards to become available. Both work exactly the way they work in the <<docs-bulk,Bulk API>>. Delete by query uses scrolled searches, so you can also specify the `scroll` parameter to control how long it keeps the search context alive, for example `?scroll=10m`. The default is 5 minutes.

===== Throttling delete requests

To control the rate at which delete by query issues batches of delete operations, you can set `requests_per_second` to any positive decimal number. This pads each batch with a wait time to throttle the rate. Set `requests_per_second` to `-1` to disable throttling.
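As a sketch, throttling to roughly 50 requests per second (an arbitrary illustrative rate):

[source,console]
--------------------------------------------------
POST /my-index-000001/_delete_by_query?requests_per_second=50&conflicts=proceed
{
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  }
}
--------------------------------------------------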
Throttling uses a wait time between batches so that the internal scroll requests can be given a timeout that takes the request padding into account. The padding time is the difference between the batch size divided by the `requests_per_second` and the time spent writing. By default the batch size is `1000`, so if `requests_per_second` is set to `500`:

[source,txt]

@ -123,9 +123,9 @@ target_time = 1000 / 500 per second = 2 seconds

wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds
--------------------------------------------------

Since the batch is issued as a single `_bulk` request, large batch sizes cause {es} to create many requests and wait before starting the next set. This is "bursty" instead of "smooth".

[[docs-delete-by-query-slice]]
===== Slicing

@ -134,11 +134,11 @@ Delete by query supports <<slice-scroll, sliced scroll>> to parallelize the

delete process. This can improve efficiency and provide a convenient way to break the request down into smaller parts.
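For instance, a sketch that asks for five slices (an arbitrary count chosen only for illustration):

[source,console]
--------------------------------------------------
POST /my-index-000001/_delete_by_query?slices=5&conflicts=proceed
{
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  }
}
--------------------------------------------------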
Setting `slices` to `auto` chooses a reasonable number for most data streams and indices. If you're slicing manually or otherwise tuning automatic slicing, keep in mind that:

* Query performance is most efficient when the number of `slices` is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number as too many `slices` hurts performance. Setting `slices` higher than the number of shards generally does not improve efficiency

@ -171,15 +171,15 @@ Defaults to `true`.

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyzer]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyze_wildcard]

`conflicts`::
(Optional, string) What to do if delete by query hits version conflicts: `abort` or `proceed`. Defaults to `abort`.

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=default_operator]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=df]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards]
+
Defaults to `open`.

@ -187,9 +187,9 @@ Defaults to `open`.

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=from]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=lenient]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=max_docs]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=preference]
@ -214,9 +214,9 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=scroll_size]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search_type]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search_timeout]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=slices]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=sort]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source]

@ -226,7 +226,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_excludes]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_includes]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=stats]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=terminate_after]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeout]

@ -239,9 +239,9 @@ include::{docdir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards]

==== {api-request-body-title}

`query`::
(Optional, <<query-dsl,query object>>) Specifies the documents to delete using the <<query-dsl,Query DSL>>.

[[docs-delete-by-query-api-response-body]]
==== Response body
@ -345,7 +345,7 @@ this is non-empty then the request aborted because of those failures.

Delete by query is implemented using batches, and any failure causes the entire process to abort, but all failures in the current batch are collected into the array. You can use the `conflicts` option to prevent delete by query from aborting on version conflicts.

[[docs-delete-by-query-api-example]]
==== {api-examples-title}

@ -377,7 +377,7 @@ POST /my-index-000001,my-index-000002/_delete_by_query

// TEST[s/^/PUT my-index-000001\nPUT my-index-000002\n/]

Limit the delete by query operation to shards that a particular routing value:

[source,console]
--------------------------------------------------
@ -571,7 +571,7 @@ though these are all taken at approximately the same time.

The value of `requests_per_second` can be changed on a running delete by query using the `_rethrottle` API. Rethrottling that speeds up the query takes effect immediately, but rethrottling that slows down the query takes effect after completing the current batch to prevent scroll timeouts.
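A sketch of such a rethrottle call, reusing the task ID from the cancellation example below and disabling throttling entirely:

[source,console]
--------------------------------------------------
POST _delete_by_query/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1
--------------------------------------------------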
@ -670,6 +670,6 @@ POST _tasks/r1A2WoRbTwKZ516z6NEs5A:36619/_cancel

The task ID can be found using the <<tasks,tasks API>>.

Cancellation should happen quickly but might take a few seconds. The task status API above will continue to list the delete by query task until this task checks that it has been cancelled and terminates itself.

@ -4,7 +4,7 @@

<titleabbrev>Update by query</titleabbrev>
++++

Updates documents that match the specified query. If no query is specified, performs an update on every document in the data stream or index without modifying the source, which is useful for picking up mapping changes.
@ -50,33 +50,33 @@ POST my-index-000001/_update_by_query?conflicts=proceed

==== {api-description-title}

You can specify the query criteria in the request URI or the request body using the same syntax as the <<search-search,Search API>>.

When you submit an update by query request, {es} gets a snapshot of the data stream or index when it begins processing the request and updates matching documents using `internal` versioning. When the versions match, the document is updated and the version number is incremented. If a document changes between the time that the snapshot is taken and the update operation is processed, it results in a version conflict and the operation fails. You can opt to count version conflicts instead of halting and returning by setting `conflicts` to `proceed`.

NOTE: Documents with a version equal to 0 cannot be updated using update by query because `internal` versioning does not support 0 as a valid version number.

While processing an update by query request, {es} performs multiple search requests sequentially to find all of the matching documents. A bulk update request is performed for each batch of matching documents. Any query or update failures cause the update by query request to fail and the failures are shown in the response. Any update requests that completed successfully still stick; they are not rolled back.

===== Refreshing shards

Specifying the `refresh` parameter refreshes all shards once the request completes. This is different than the update API's `refresh` parameter, which causes just the shard that received the request to be refreshed. Unlike the update API, it does not support `wait_for`.

[[docs-update-by-query-task-api]]
@ -84,9 +84,9 @@ that received the request to be refreshed. Unlike the update API, it does not su

If the request contains `wait_for_completion=false`, {es} performs some preflight checks, launches the request, and returns a <<tasks,`task`>> you can use to cancel or get the status of the task. {es} creates a record of this task as a document at `.tasks/task/${taskId}`. When you are done with a task, you should delete the task document so {es} can reclaim the space.

===== Waiting for active shards

@ -96,20 +96,20 @@ before proceeding with the request. See <<index-wait-for-active-shards>>

for details. `timeout` controls how long each write request waits for unavailable shards to become available. Both work exactly the way they work in the <<docs-bulk,Bulk API>>. Update by query uses scrolled searches, so you can also specify the `scroll` parameter to control how long it keeps the search context alive, for example `?scroll=10m`. The default is 5 minutes.

===== Throttling update requests

To control the rate at which update by query issues batches of update operations, you can set `requests_per_second` to any positive decimal number. This pads each batch with a wait time to throttle the rate. Set `requests_per_second` to `-1` to disable throttling.
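As a sketch, the same kind of throttling applied to update by query (the rate is arbitrary; with no body, every document is updated in place):

[source,console]
--------------------------------------------------
POST my-index-000001/_update_by_query?requests_per_second=50&conflicts=proceed
--------------------------------------------------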
Throttling uses a wait time between batches so that the internal scroll requests can be given a timeout that takes the request padding into account. The padding time is the difference between the batch size divided by the `requests_per_second` and the time spent writing. By default the batch size is `1000`, so if `requests_per_second` is set to `500`:

[source,txt]

@ -118,9 +118,9 @@ target_time = 1000 / 500 per second = 2 seconds

wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds
--------------------------------------------------

Since the batch is issued as a single `_bulk` request, large batch sizes cause {es} to create many requests and wait before starting the next set. This is "bursty" instead of "smooth".

[[docs-update-by-query-slice]]
===== Slicing
@ -129,11 +129,11 @@ Update by query supports <<slice-scroll, sliced scroll>> to parallelize the

update process. This can improve efficiency and provide a convenient way to break the request down into smaller parts.

Setting `slices` to `auto` chooses a reasonable number for most data streams and indices. If you're slicing manually or otherwise tuning automatic slicing, keep in mind that:

* Query performance is most efficient when the number of `slices` is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number as too many `slices` hurts performance. Setting `slices` higher than the number of shards generally does not improve efficiency

@ -166,15 +166,15 @@ Defaults to `true`.

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyzer]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyze_wildcard]

`conflicts`::
(Optional, string) What to do if update by query hits version conflicts: `abort` or `proceed`. Defaults to `abort`.

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=default_operator]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=df]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards]
+
Defaults to `open`.

@ -182,9 +182,9 @@ Defaults to `open`.

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=from]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=lenient]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=max_docs]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=pipeline]
@ -211,9 +211,9 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=scroll_size]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search_type]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search_timeout]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=slices]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=sort]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source]

@ -223,7 +223,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_excludes]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_includes]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=stats]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=terminate_after]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeout]

@ -236,9 +236,9 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards

==== {api-request-body-title}

`query`::
(Optional, <<query-dsl,query object>>) Specifies the documents to update using the <<query-dsl,Query DSL>>.

[[docs-update-by-query-api-response-body]]
==== Response body

@ -336,7 +336,7 @@ POST my-index-000001/_update_by_query?routing=1

--------------------------------------------------
// TEST[setup:my_index]

By default update by query uses scroll batches of 1000. You can change the batch size with the `scroll_size` parameter:

[source,console]

@ -348,7 +348,7 @@ POST my-index-000001/_update_by_query?scroll_size=100

[[docs-update-by-query-api-source]]
===== Update the document source

Update by query supports scripts to update the document source. For example, the following request increments the `count` field for all documents with a `user.id` of `kimchy` in `my-index-000001`:
@ -390,16 +390,16 @@ operation that is performed:

[horizontal]
`noop`::
Set `ctx.op = "noop"` if your script decides that it doesn't have to make any changes. The update by query operation skips updating the document and increments the `noop` counter.

`delete`::
Set `ctx.op = "delete"` if your script decides that the document should be deleted. The update by query operation deletes the document and increments the `deleted` counter.

Update by query only supports `update`, `noop`, and `delete`. Setting `ctx.op` to anything else is an error. Setting any other field in `ctx` is an error. This API only enables you to modify the source of matching documents; you cannot move them.
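As an illustration, a hypothetical script (not taken from this page) that deletes matching documents whose `count` is zero and leaves the rest untouched:

[source,console]
--------------------------------------------------
POST my-index-000001/_update_by_query
{
  "script": {
    "source": "if (ctx._source.count == 0) { ctx.op = 'delete' } else { ctx.op = 'noop' }",
    "lang": "painless"
  },
  "query": {
    "term": {
      "user.id": "kimchy"
    }
  }
}
--------------------------------------------------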
[[docs-update-by-query-api-ingest-pipeline]]
===== Update documents using an ingest pipeline

@ -485,7 +485,7 @@ of operations that the reindex expects to perform. You can estimate the

progress by adding the `updated`, `created`, and `deleted` fields. The request will finish when their sum is equal to the `total` field.

With the task id you can look up the task directly. The following example retrieves information about task `r1A2WoRbTwKZ516z6NEs5A:36619`:

[source,console]

@ -515,8 +515,8 @@ POST _tasks/r1A2WoRbTwKZ516z6NEs5A:36619/_cancel

The task ID can be found using the <<tasks,tasks API>>.

Cancellation should happen quickly but might take a few seconds. The task status API above will continue to list the update by query task until this task checks that it has been cancelled and terminates itself.
@ -49,7 +49,7 @@ refresh operation completes.

====
Refreshes are resource-intensive. To ensure good cluster performance, we recommend waiting for {es}'s periodic refresh rather than performing an explicit refresh when possible.

@ -60,7 +60,7 @@ Defaults to `false`.

==== {api-response-body-title}

`<segment>`::
(String)
include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=segment]

`generation`::

@ -83,7 +83,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=segment-size]

(Integer)
include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=memory]

`committed`::
(Boolean)
include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=committed]
@ -33,7 +33,7 @@ more data streams and indices.

By default, the returned statistics are index-level with `primaries` and `total` aggregations. `primaries` are the values for only the primary shards. `total` are the accumulated values for both primary and replica shards.

To get shard-level statistics,

@ -147,7 +147,7 @@ and reopen the index.

[NOTE]
====
You cannot close the write index of a data stream.

To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the <<create-a-data-stream-template,index
@ -669,7 +669,7 @@ The method for transforming the data. These objects define the pivot function

end::pivot[]

tag::pivot-aggs[]
Defines how to aggregate the grouped data. The following aggregations are supported:
+
--

@ -691,7 +691,7 @@ supported:

* <<search-aggregations-metrics-weight-avg-aggregation,Weighted average>>

IMPORTANT: {transforms-cap} support a subset of the functionality in aggregations. See <<transform-limitations>>.

--

@ -703,7 +703,7 @@ Defines how to group the data. More than one grouping can be defined
+
--
* <<_date_histogram,Date histogram>>
* <<_geotile_grid,Geotile Grid>>
* <<_histogram,Histogram>>
* <<_terms,Terms>>
@ -22,16 +22,16 @@ the <<search-search,search API>> works.

[[search-count-api-desc]]
==== {api-description-title}

The count API allows you to execute a query and get the number of matches for that query. The query can either be provided using a simple query string as a parameter, or using the <<query-dsl,Query DSL>> defined within the request body.

The count API supports <<multi-index,multi-target syntax>>. You can run a single count API search across multiple data streams and indices.

The operation is broadcast across all shards. For each shard id group, a replica is chosen and the query is executed against it. This means that replicas increase the scalability of count.
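For example, a minimal count request with the query in the request body (a sketch reusing the example index):

[source,console]
--------------------------------------------------
GET /my-index-000001/_count
{
  "query": {
    "term": {
      "user.id": "kimchy"
    }
  }
}
--------------------------------------------------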
@ -74,7 +74,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=lenient]

`min_score`::
(Optional, float)
Sets the minimum `_score` value that documents must have to be included in the result.

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=preference]

@ -4,7 +4,7 @@

<titleabbrev>Ranking evaluation</titleabbrev>
++++

Allows you to evaluate the quality of ranked search results over a set of typical search queries.

[[search-rank-eval-api-request]]
@ -18,46 +18,46 @@ typical search queries.

[[search-rank-eval-api-desc]]
==== {api-description-title}

The ranking evaluation API allows you to evaluate the quality of ranked search results over a set of typical search queries. Given this set of queries and a list of manually rated documents, the `_rank_eval` endpoint calculates and returns typical information retrieval metrics like _mean reciprocal rank_, _precision_ or _discounted cumulative gain_.

Search quality evaluation starts with looking at the users of your search application, and the things that they are searching for. Users have a specific _information need_; for example, they are looking for a gift in a web shop or want to book a flight for their next holiday. They usually enter some search terms into a search box or some other web form. All of this information, together with meta information about the user (for example the browser, location, earlier preferences and so on) then gets translated into a query to the underlying search system.

The challenge for search engineers is to tweak this translation process from user entries to a concrete query, in such a way that the search results contain the most relevant information with respect to the user's information need. This can only be done if the search result quality is evaluated constantly across a representative test suite of typical user queries, so that improvements in the rankings for one particular query don't negatively affect the ranking for other types of queries.

In order to get started with search quality evaluation, you need three basic things:

. A collection of documents you want to evaluate your query performance against, usually one or more data streams or indices.
. A collection of typical search requests that users enter into your system.
. A set of document ratings that represent the documents' relevance with respect to a search request.

It is important to note that one set of document ratings is needed per test query, and that the relevance judgements are based on the information need of the user that entered the query.

The ranking evaluation API provides a convenient way to use this information in a ranking evaluation request to calculate different search evaluation metrics. This gives you a first estimation of your overall search quality, as well as a measurement to optimize against when fine-tuning various aspects of the query generation in your application.
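As a rough, self-contained sketch of such a request (the index name, document IDs, and ratings here are purely illustrative, combining the pieces shown further below):

[source,console]
--------------------------------------------------
GET /my-index-000001/_rank_eval
{
  "requests": [
    {
      "id": "amsterdam_query",
      "request": { "query": { "match": { "text": "amsterdam" } } },
      "ratings": [
        { "_index": "my-index-000001", "_id": "doc1", "rating": 0 },
        { "_index": "my-index-000001", "_id": "doc2", "rating": 3 }
      ]
    }
  ],
  "metric": {
    "precision": {
      "k": 10,
      "relevant_rating_threshold": 1
    }
  }
}
--------------------------------------------------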
@ -97,7 +97,7 @@ In its most basic form, a request to the `_rank_eval` endpoint has two sections:

-----------------------------
GET /my-index-000001/_rank_eval
{
  "requests": [ ... ], <1>
  "metric": { <2>
    "mean_reciprocal_rank": { ... } <3>
  }

@ -109,7 +109,7 @@ GET /my-index-000001/_rank_eval

<2> definition of the evaluation metric to calculate
<3> a specific metric and its parameters

The request section contains several search requests typical to your application, along with the document ratings for each particular search request.

[source,js]

@ -122,7 +122,7 @@ GET /my-index-000001/_rank_eval

      "request": { <2>
        "query": { "match": { "text": "amsterdam" } }
      },
      "ratings": [ <3>
        { "_index": "my-index-000001", "_id": "doc1", "rating": 0 },
        { "_index": "my-index-000001", "_id": "doc2", "rating": 3 },
        { "_index": "my-index-000001", "_id": "doc3", "rating": 1 }
@ -150,38 +150,38 @@ GET /my-index-000001/_rank_eval

- `_id`: The document ID.
- `rating`: The document's relevance with regard to this search request.

A document `rating` can be any integer value that expresses the relevance of the document on a user-defined scale. For some of the metrics, just giving a binary rating (for example `0` for irrelevant and `1` for relevant) will be sufficient, while other metrics can use a more fine-grained scale.

===== Template-based ranking evaluation

As an alternative to having to provide a single query per test request, it is possible to specify query templates in the evaluation request and later refer to them. This way, queries with a similar structure that differ only in their parameters don't have to be repeated all the time in the `requests` section. In typical search systems, where user inputs usually get filled into a small set of query templates, this helps make the evaluation request more succinct.

[source,js]
--------------------------------
GET /my-index-000001/_rank_eval
{
  [...]
  "templates": [
    {
      "id": "match_one_field_query", <1>
      "template": { <2>
        "inline": {
          "query": {
            "match": { "{{field}}": { "query": "{{query_string}}" }}
          }
        }
      }
    }
  ],
  "requests": [
    {
      "id": "amsterdam_query"
@ -197,7 +197,7 @@ GET /my-index-000001/_rank_eval

--------------------------------
// NOTCONSOLE

<1> the template id
<2> the template definition to use
<3> a reference to a previously defined template
<4> the parameters to use to fill the template

@ -205,7 +205,7 @@ GET /my-index-000001/_rank_eval

===== Available evaluation metrics

The `metric` section determines which of the available evaluation metrics will be used. The following metrics are supported:

[discrete]

@ -254,8 +254,8 @@ The `precision` metric takes the following optional parameters

[cols="<,<",options="header",]
|=======================================================================
|Parameter |Description
|`k` |sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter in the query. Defaults to 10.
|`relevant_rating_threshold` |sets the rating threshold above which documents are considered to be "relevant". Defaults to `1`.
|`ignore_unlabeled` |controls how unlabeled documents in the search results are counted.
@ -318,10 +318,10 @@ in the query. Defaults to 10.

[discrete]
===== Mean reciprocal rank

For every query in the test suite, this metric calculates the reciprocal of the rank of the first relevant document. For example, finding the first relevant result in position 3 means the reciprocal rank is 1/3. The reciprocal rank for each query is averaged across all queries in the test suite to give the {wikipedia}/Mean_reciprocal_rank[mean reciprocal rank].

[source,console]

@ -349,7 +349,7 @@ The `mean_reciprocal_rank` metric takes the following optional parameters

[cols="<,<",options="header",]
|=======================================================================
|Parameter |Description
|`k` |sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter in the query. Defaults to 10.
|`relevant_rating_threshold` |Sets the rating threshold above which documents are considered to be "relevant". Defaults to `1`.

@ -359,13 +359,13 @@ in the query. Defaults to 10.

[discrete]
===== Discounted cumulative gain (DCG)

In contrast to the two metrics above, {wikipedia}/Discounted_cumulative_gain[discounted cumulative gain] takes both the rank and the rating of the search results into account.

The assumption is that highly relevant documents are more useful for the user when appearing at the top of the result list. Therefore, the DCG formula reduces the contribution that high ratings for documents on lower search ranks have on the overall DCG metric.

[source,console]

@ -393,7 +393,7 @@ The `dcg` metric takes the following optional parameters:

[cols="<,<",options="header",]
|=======================================================================
|Parameter |Description
|`k` |sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter in the query. Defaults to 10.
|`normalize` | If set to `true`, this metric will calculate the {wikipedia}/Discounted_cumulative_gain#Normalized_DCG[Normalized DCG].
|=======================================================================
@ -402,26 +402,26 @@ in the query. Defaults to 10.

[discrete]
===== Expected Reciprocal Rank (ERR)

Expected Reciprocal Rank (ERR) is an extension of the classical reciprocal rank for the graded relevance case (Olivier Chapelle, Donald Metzler, Ya Zhang, and Pierre Grinspan. 2009. https://olivier.chapelle.cc/pub/err.pdf[Expected reciprocal rank for graded relevance].)

It is based on the assumption of a cascade model of search, in which a user scans through ranked search results in order and stops at the first document that satisfies the information need. For this reason, it is a good metric for question answering and navigation queries, but less so for survey-oriented information needs where the user is interested in finding many relevant documents in the top k results.

The metric models the expectation of the reciprocal of the position at which a user stops reading through the result list. This means that a relevant document in a top ranking position will have a large contribution to the overall score. However, the same document will contribute much less to the score if it appears in a lower rank; even more so if there are some relevant (but maybe less relevant) documents preceding it. In this way, the ERR metric discounts documents that are shown after very relevant documents. This introduces a notion of dependency in the ordering of relevant documents that e.g. Precision or DCG don't account for.

[source,console]

@ -458,9 +458,9 @@ in the query. Defaults to 10.

===== Response format

The response of the `_rank_eval` endpoint contains the overall calculated result for the defined quality metric, a `details` section with a breakdown of results for each query in the test suite and an optional `failures` section that shows potential errors of individual queries. The response has the following format:

[source,js]
@ -186,5 +186,5 @@ The API returns the following result:

// TESTRESPONSE[s/0TvkCyF7TAmM1wHP4a42-A/$body.shards.1.0.allocation_id.id/]
// TESTRESPONSE[s/fMju3hd1QHWmWrIgFnI4Ww/$body.shards.0.0.allocation_id.id/]

Because of the specified routing values, the search is only executed against two of the shards.
@ -29,17 +29,17 @@ GET _search/template

[[search-template-api-desc]]
==== {api-description-title}

The `/_search/template` endpoint allows you to use the mustache language to pre-render search requests before they are executed, filling existing templates with template parameters.

For more information on how Mustache templating works and what kind of templating you can do with it, check out the https://mustache.github.io/mustache.5.html[online documentation of the mustache project].

NOTE: The mustache language is implemented in {es} as a sandboxed scripting language, hence it obeys settings that may be used to enable or disable scripts per type and context as described in the <<allowed-script-types-setting, scripting docs>>.
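For example, a minimal sketch of an inline template with its parameters (the field and value are placeholders):

[source,console]
--------------------------------------------------
GET _search/template
{
  "source": {
    "query": {
      "match": {
        "{{my_field}}": "{{my_value}}"
      }
    }
  },
  "params": {
    "my_field": "message",
    "my_value": "foo"
  }
}
--------------------------------------------------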
@ -57,17 +57,17 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices]

Defaults to `true`.

`ccs_minimize_roundtrips`::
(Optional, boolean) If `true`, network round-trips are minimized for cross-cluster search requests. Defaults to `true`.

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards]

`explain`::
(Optional, boolean) If `true`, the response includes additional details about score computation as part of a hit. Defaults to `false`.

`ignore_throttled`::
(Optional, boolean) If `true`, specified concrete, expanded or aliased indices are not included in the response when throttled. Defaults to `true`.

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable]

@ -75,11 +75,11 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailab

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=preference]

`profile`::
(Optional, boolean) If `true`, the query execution is profiled. Defaults to `false`.

`rest_total_hits_as_int`::
(Optional, boolean) If `true`, `hits.total` are rendered as an integer in the response. Defaults to `false`.

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=routing]

@ -89,9 +89,9 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=scroll]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search_type]

`typed_keys`::
(Optional, boolean) If `true`, aggregation and suggester names are prefixed by their respective types in the response. Defaults to `false`.

[[search-template-api-request-body]]
==== {api-request-body-title}
@ -128,7 +128,7 @@ POST _scripts/<templateid>

//////////////////////////

The API returns the following result if the template has been successfully created:

[source,console-result]

@ -198,7 +198,7 @@ GET _search/template

[[_validating_templates]]
==== Validating a search template

A template can be rendered in a response with given parameters by using the following request:

[source,console]

@ -603,7 +603,7 @@ query as a string instead:

===== Encoding URLs

The `{{#url}}value{{/url}}` function can be used to encode a string value in an HTML encoding form as defined by the https://www.w3.org/TR/html4/[HTML specification].

As an example, it is useful to encode a URL:
@ -657,7 +657,7 @@ Allows to execute several search template requests.

[[multi-search-template-api-desc]]
==== {api-description-title}

Allows you to execute several search template requests within the same API using the `_msearch/template` endpoint.

The format of the request is similar to the <<search-multi-search, Multi

@ -672,10 +672,10 @@ body\n

--------------------------------------------------
// NOTCONSOLE

The header part supports the same `index`, `search_type`, `preference`, and `routing` options as the Multi Search API.

The body includes a search template body request and supports inline, stored and file templates.
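As a rough sketch of that newline-delimited format (the index and parameters are placeholders):

[source,js]
--------------------------------------------------
GET _msearch/template
{"index": "my-index-000001"}
{"source": {"query": {"match": {"message": "{{query_string}}"}}}, "params": {"query_string": "hello"}}
--------------------------------------------------
// NOTCONSOLE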
@ -702,5 +702,5 @@ $ curl -H "Content-Type: application/x-ndjson" -XGET localhost:9200/_msearch/tem

The response returns a `responses` array, which includes the search template response for each search template request matching its order in the original multi search template request. If there was a complete failure for that specific search template request, an object with `error` message will be returned in place of the actual search response.
@ -20,7 +20,7 @@ GET my-index-000001/_validate/query?q=user.id:kimchy

==== {api-description-title}

The validate API allows you to validate a potentially expensive query without executing it. The query can be sent either as a path parameter or in the request body.
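For instance, a minimal sketch that validates a query sent in the request body:

[source,console]
--------------------------------------------------
GET my-index-000001/_validate/query
{
  "query": {
    "term": {
      "user.id": "kimchy"
    }
  }
}
--------------------------------------------------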
@ -36,13 +36,13 @@ To search all data streams or indices in a cluster, omit this parameter or use

`_all` or `*`.

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=query]

[[search-validate-api-query-params]]
==== {api-query-parms-title}

`all_shards`::
(Optional, boolean) If `true`, the validation is executed on all shards instead of one random shard per index. Defaults to `false`.

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices]

@ -60,7 +60,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=df]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards]

`explain`::
(Optional, boolean) If `true`, the response returns detailed information if an error has occurred. Defaults to `false`.

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable]

@ -68,7 +68,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=lenient]

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=lenient]

`rewrite`::
(Optional, boolean) If `true`, returns a more detailed explanation showing the actual Lucene query that will be executed. Defaults to `false`.

include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search-q]
@ -129,8 +129,8 @@ GET my-index-000001/_validate/query

NOTE: The query being sent in the body must be nested in a `query` key, the same way the <<search-search,search API>> works.

If the query is invalid, `valid` will be `false`. Here the query is invalid because {es} knows the `post_date` field should be a date due to dynamic mapping, and 'foo' does not correctly parse into a date:

[source,console]

@ -154,7 +154,7 @@ GET my-index-000001/_validate/query

===== The explain parameter

An `explain` parameter can be specified to get more detailed information about why a query failed:

[source,console]
@ -194,8 +194,8 @@ The API returns the following response:
|
|||
|
||||
===== The rewrite parameter
|
||||
|
||||
When the query is valid, the explanation defaults to the string representation
|
||||
of that query. With `rewrite` set to `true`, the explanation is more detailed
|
||||
When the query is valid, the explanation defaults to the string representation
|
||||
of that query. With `rewrite` set to `true`, the explanation is more detailed
|
||||
showing the actual Lucene query that will be executed.
|
||||
|
||||
[source,console]
|
||||
|
|
|
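As a hedged, illustrative sketch of the parameters documented above (not part of this diff), the same validation can be issued from the low-level Java REST client; the index name and query are placeholders:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class ValidateQueryExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request validate = new Request("GET", "/my-index-000001/_validate/query");
            validate.addParameter("all_shards", "true");   // validate on every shard, not one random shard
            validate.addParameter("explain", "true");      // include detail when validation fails
            validate.addParameter("rewrite", "true");      // show the rewritten Lucene query when it is valid
            // The query must be nested under a top-level "query" key, as noted above.
            validate.setJsonEntity("{\"query\": {\"match\": {\"user.id\": \"kimchy\"}}}");
            Response response = client.performRequest(validate);
            System.out.println(response.getStatusLine());
        }
    }
}
```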
@ -139,6 +139,11 @@ public class KibanaPlugin extends Plugin implements SystemIndexPlugin {
            return "kibana_" + super.getName();
        }

        @Override
        public boolean allowSystemIndexAccessByDefault() {
            return true;
        }

        @Override
        public List<Route> routes() {
            return Collections.unmodifiableList(
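A condensed sketch of the pattern the hunk above relies on, with made-up names (`MyPlugin`, `.my-system-index`): a `SystemIndexPlugin` declares its system index via a descriptor, and a REST handler that legitimately works against that index opts out of the new deprecation warning by overriding `allowSystemIndexAccessByDefault()`, exactly as the Kibana handlers and the test plugin later in this diff do. This is an illustration, not code from the commit:

```java
import java.util.Collection;
import java.util.Collections;

import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.indices.SystemIndexDescriptor;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.plugins.SystemIndexPlugin;
import org.elasticsearch.rest.BaseRestHandler;

public class MyPlugin extends Plugin implements SystemIndexPlugin {

    @Override
    public Collection<SystemIndexDescriptor> getSystemIndexDescriptors(Settings settings) {
        // register ".my-system-index" (hypothetical) as a system index owned by this plugin
        return Collections.singletonList(new SystemIndexDescriptor(".my-system-index", "Example system index"));
    }

    public abstract static class MySystemIndexHandler extends BaseRestHandler {
        @Override
        public boolean allowSystemIndexAccessByDefault() {
            return true; // this handler is a purpose-specific API for .my-system-index, so no warning
        }
    }
}
```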
@ -45,6 +45,8 @@ public class RestMultiSearchTemplateActionTests extends RestActionTestCase {
            .withPath("/some_index/some_type/_msearch/template")
            .withContent(bytesContent, XContentType.JSON)
            .build();
        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        dispatchRequest(request);
        assertWarnings(RestMultiSearchTemplateAction.TYPES_DEPRECATION_MESSAGE);

@ -59,6 +61,8 @@ public class RestMultiSearchTemplateActionTests extends RestActionTestCase {
            .withPath("/some_index/_msearch/template")
            .withContent(bytesContent, XContentType.JSON)
            .build();
        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        dispatchRequest(request);
        assertWarnings(RestMultiSearchTemplateAction.TYPES_DEPRECATION_MESSAGE);
@ -36,6 +36,7 @@ import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.indices.SystemIndices;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.test.ESTestCase;

import java.util.HashMap;

@ -61,7 +62,8 @@ public class ReindexSourceTargetValidationTests extends ESTestCase {
            .put(index("baz"), true)
            .put(index("source", "source_multi"), true)
            .put(index("source2", "source_multi"), true)).build();
    private static final IndexNameExpressionResolver INDEX_NAME_EXPRESSION_RESOLVER = new IndexNameExpressionResolver();
    private static final IndexNameExpressionResolver INDEX_NAME_EXPRESSION_RESOLVER =
        new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY));
    private static final AutoCreateIndex AUTO_CREATE_INDEX = new AutoCreateIndex(Settings.EMPTY,
        new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS), INDEX_NAME_EXPRESSION_RESOLVER,
        new SystemIndices(new HashMap<>()));
@ -24,8 +24,8 @@ import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.action.search.RestSearchAction;
import org.elasticsearch.test.rest.FakeRestRequest;
import org.elasticsearch.test.rest.RestActionTestCase;

import org.junit.Before;

import java.io.IOException;

import static java.util.Collections.emptyList;

@ -44,6 +44,10 @@ public class RestDeleteByQueryActionTests extends RestActionTestCase {
            .withMethod(RestRequest.Method.POST)
            .withPath("/some_index/some_type/_delete_by_query")
            .build();

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteLocallyVerifier((arg1, arg2) -> null);

        dispatchRequest(request);

        // checks the type in the URL is propagated correctly to the request object
@ -31,8 +31,8 @@ import org.elasticsearch.test.rest.RestActionTestCase;
import org.junit.Before;

import java.io.IOException;
import java.util.Collections;
import java.util.Arrays;
import java.util.Collections;

import static java.util.Collections.singletonMap;

@ -102,6 +102,10 @@ public class RestReindexActionTests extends RestActionTestCase {
        }
        b.endObject();
        requestBuilder.withContent(new BytesArray(BytesReference.bytes(b).toBytesRef()), XContentType.JSON);

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteLocallyVerifier((arg1, arg2) -> null);

        dispatchRequest(requestBuilder.build());
        assertWarnings(ReindexRequest.TYPES_DEPRECATION_MESSAGE);
    }

@ -123,6 +127,10 @@ public class RestReindexActionTests extends RestActionTestCase {
        }
        b.endObject();
        requestBuilder.withContent(new BytesArray(BytesReference.bytes(b).toBytesRef()), XContentType.JSON);

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteLocallyVerifier((arg1, arg2) -> null);

        dispatchRequest(requestBuilder.build());
        assertWarnings(ReindexRequest.TYPES_DEPRECATION_MESSAGE);
    }
@ -24,8 +24,8 @@ import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.action.search.RestSearchAction;
import org.elasticsearch.test.rest.FakeRestRequest;
import org.elasticsearch.test.rest.RestActionTestCase;

import org.junit.Before;

import java.io.IOException;

import static java.util.Collections.emptyList;

@ -45,6 +45,10 @@ public class RestUpdateByQueryActionTests extends RestActionTestCase {
            .withMethod(RestRequest.Method.POST)
            .withPath("/some_index/some_type/_update_by_query")
            .build();

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteLocallyVerifier((arg1, arg2) -> null);

        dispatchRequest(request);

        // checks the type in the URL is propagated correctly to the request object
@ -74,6 +74,8 @@

---
"Rethrottle to -1 which turns off throttling":
  - skip:
      features: warnings
  # Throttling happens between each scroll batch so we need to control the size of the batch by using a single shard
  # and a small batch size on the request
  - do:

@ -95,6 +97,7 @@
        index: test
        body: { "text": "test" }
  - do:
      indices.refresh: {}

  - do:

@ -121,6 +124,8 @@
        task_id: $task

  - do:
      warnings:
        - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default"
      indices.refresh: {}

  - do:
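The `warnings:` entries above assert the new deprecation warning that these requests now receive because they touch the `.tasks` system index. As a hedged client-side counterpart (not part of this change), the low-level Java REST client rejects responses with unexpected `Warning` headers unless a warnings handler accepts them; the helper below mirrors the `expectWarnings` pattern used by `SystemIndexRestIT` later in this diff:

```java
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RequestOptions;

public final class SystemIndexWarnings {

    static final String TASKS_WARNING =
        "this request accesses system indices: [.tasks], but in a future major version, " +
            "direct access to system indices will be prevented by default";

    // Build a refresh request that tolerates exactly the expected deprecation warning.
    static Request refreshTasksIndex() {
        Request request = new Request("POST", "/.tasks/_refresh");
        RequestOptions.Builder options = RequestOptions.DEFAULT.toBuilder();
        // Fail the request unless the warning list is exactly the one expected message.
        options.setWarningsHandler(warnings -> warnings.size() != 1 || warnings.contains(TASKS_WARNING) == false);
        request.setOptions(options);
        return request;
    }
}
```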
@ -62,6 +62,8 @@

---
"Multiple slices with wait_for_completion=false":
  - skip:
      features: warnings
  - do:
      index:
        index: test

@ -151,8 +153,12 @@

  # Only the "parent" reindex task wrote its status to the tasks index though
  - do:
      warnings:
        - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default"
      indices.refresh: {}
  - do:
      warnings:
        - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default"
      search:
        rest_total_hits_as_int: true
        index: .tasks

@ -165,6 +171,8 @@

---
"Multiple slices with rethrottle":
  - skip:
      features: warnings
  - do:
      index:
        index: test

@ -196,7 +204,8 @@
        id: 6
        body: { "text": "test" }
  - do:
      indices.refresh: {}
      indices.refresh:
        index: test

  # Start the task with a requests_per_second that should make it take a very long time
  - do:

@ -259,8 +268,12 @@

  # Only the "parent" reindex task wrote its status to the tasks index though
  - do:
      warnings:
        - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default"
      indices.refresh: {}
  - do:
      warnings:
        - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default"
      search:
        rest_total_hits_as_int: true
        index: .tasks
@ -58,6 +58,8 @@

---
"Multiple slices with wait_for_completion=false":
  - skip:
      features: warnings
  - do:
      index:
        index: source

@ -160,8 +162,12 @@

  # Only the "parent" reindex task wrote its status to the tasks index though
  - do:
      warnings:
        - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default"
      indices.refresh: {}
  - do:
      warnings:
        - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default"
      search:
        rest_total_hits_as_int: true
        index: .tasks

@ -170,6 +176,8 @@

---
"Multiple slices with rethrottle":
  - skip:
      features: warnings
  - do:
      index:
        index: source

@ -272,8 +280,12 @@

  # Only the "parent" reindex task wrote its status to the tasks index though
  - do:
      warnings:
        - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default"
      indices.refresh: {}
  - do:
      warnings:
        - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default"
      search:
        rest_total_hits_as_int: true
        index: .tasks
@ -54,6 +54,8 @@

---
"Multiple slices with wait_for_completion=false":
  - skip:
      features: warnings
  - do:
      index:
        index: test

@ -143,8 +145,12 @@

  # Only the "parent" reindex task wrote its status to the tasks index though
  - do:
      warnings:
        - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default"
      indices.refresh: {}
  - do:
      warnings:
        - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default"
      search:
        rest_total_hits_as_int: true
        index: .tasks

@ -152,6 +158,8 @@

---
"Multiple slices with rethrottle":
  - skip:
      features: warnings
  - do:
      index:
        index: test

@ -246,8 +254,12 @@

  # Only the "parent" reindex task wrote its status to the tasks index though
  - do:
      warnings:
        - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default"
      indices.refresh: {}
  - do:
      warnings:
        - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default"
      search:
        rest_total_hits_as_int: true
        index: .tasks
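A minimal sketch (host and index names are illustrative, not from the commit) of why these sliced reindex/update/delete-by-query tests end up asserting warnings on `.tasks`: a job started with `wait_for_completion=false` records its result in the `.tasks` system index, so any later refresh or search of that index is direct system-index access. The same flow appears in `FullClusterRestartIT` below:

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class AsyncReindexToTasksIndex {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request reindex = new Request("POST", "/_reindex");
            reindex.setJsonEntity("{\"source\":{\"index\":\"test_index_old\"},\"dest\":{\"index\":\"test_index_reindex\"}}");
            reindex.addParameter("wait_for_completion", "false");
            // The response body is JSON like {"task":"<node>:<id>"}; once the task finishes,
            // its result document lives in the .tasks system index.
            String body = EntityUtils.toString(client.performRequest(reindex).getEntity());
            System.out.println(body);
        }
    }
}
```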
@ -42,6 +42,7 @@ import org.elasticsearch.rest.action.document.RestIndexAction;
import org.elasticsearch.rest.action.document.RestUpdateAction;
import org.elasticsearch.rest.action.search.RestExplainAction;
import org.elasticsearch.test.NotEqualMessageBuilder;
import org.elasticsearch.test.XContentTestUtils;
import org.elasticsearch.test.rest.ESRestTestCase;
import org.elasticsearch.test.rest.yaml.ObjectPath;
import org.junit.Before;

@ -62,6 +63,7 @@ import java.util.regex.Pattern;
import static java.util.Collections.emptyMap;
import static java.util.Collections.singletonList;
import static java.util.Collections.singletonMap;
import static org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.SYSTEM_INDEX_ENFORCEMENT_VERSION;
import static org.elasticsearch.cluster.routing.UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING;
import static org.elasticsearch.cluster.routing.allocation.decider.MaxRetryAllocationDecider.SETTING_ALLOCATION_MAX_RETRY;
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;

@ -69,6 +71,7 @@ import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.greaterThan;
import static org.hamcrest.Matchers.greaterThanOrEqualTo;
import static org.hamcrest.Matchers.hasKey;
import static org.hamcrest.Matchers.hasSize;
import static org.hamcrest.Matchers.is;
import static org.hamcrest.Matchers.notNullValue;

@ -342,7 +345,7 @@ public class FullClusterRestartIT extends AbstractFullClusterRestartTestCase {
            shrinkIndexRequest.setJsonEntity("{\"settings\": {\"index.number_of_shards\": 1}}");
            client().performRequest(shrinkIndexRequest);

            client().performRequest(new Request("POST", "/_refresh"));
            refreshAllIndices();
        } else {
            numDocs = countOfIndexedRandomDocuments();
        }

@ -427,7 +430,7 @@ public class FullClusterRestartIT extends AbstractFullClusterRestartTestCase {
            numDocs = countOfIndexedRandomDocuments();
        }

        client().performRequest(new Request("POST", "/_refresh"));
        refreshAllIndices();

        Map<?, ?> response = entityAsMap(client().performRequest(new Request("GET", "/" + index + "/_search")));
        assertNoFailures(response);

@ -1457,73 +1460,108 @@ public class FullClusterRestartIT extends AbstractFullClusterRestartTestCase {
        assertTotalHits(numDocs, entityAsMap(client().performRequest(new Request("GET", "/" + index + "/_search"))));
    }

    public void testCreateSystemIndexInOldVersion() throws Exception {
        assumeTrue("only run on old cluster", isRunningAgainstOldCluster());
        // create index
        Request createTestIndex = new Request("PUT", "/test_index_old");
        createTestIndex.setJsonEntity("{\"settings\": {\"index.number_of_replicas\": 0, \"index.number_of_shards\": 1}}");
        client().performRequest(createTestIndex);
    @SuppressWarnings("unchecked")
    public void testSystemIndexMetadataIsUpgraded() throws Exception {
        final String systemIndexWarning = "this request accesses system indices: [.tasks], but in a future major version, direct " +
            "access to system indices will be prevented by default";
        if (isRunningAgainstOldCluster()) {
            // create index
            Request createTestIndex = new Request("PUT", "/test_index_old");
            createTestIndex.setJsonEntity("{\"settings\": {\"index.number_of_replicas\": 0, \"index.number_of_shards\": 1}}");
            client().performRequest(createTestIndex);

            Request bulk = new Request("POST", "/_bulk");
            bulk.addParameter("refresh", "true");
            bulk.setJsonEntity("{\"index\": {\"_index\": \"test_index_old\", \"_type\" : \"_doc\"}}\n" +
                "{\"f1\": \"v1\", \"f2\": \"v2\"}\n");
            if (isRunningAgainstAncientCluster() == false) {
                bulk.setOptions(expectWarnings(RestBulkAction.TYPES_DEPRECATION_MESSAGE));
            }
            client().performRequest(bulk);

            // start a async reindex job
            Request reindex = new Request("POST", "/_reindex");
            reindex.setJsonEntity(
                "{\n" +
                "  \"source\":{\n" +
                "    \"index\":\"test_index_old\"\n" +
                "  },\n" +
                "  \"dest\":{\n" +
                "    \"index\":\"test_index_reindex\"\n" +
                "  }\n" +
                "}");
            reindex.addParameter("wait_for_completion", "false");
            Map<String, Object> response = entityAsMap(client().performRequest(reindex));
            String taskId = (String) response.get("task");

            // wait for task
            Request getTask = new Request("GET", "/_tasks/" + taskId);
            getTask.addParameter("wait_for_completion", "true");
            client().performRequest(getTask);

            // make sure .tasks index exists
            assertBusy(() -> {
                // make sure .tasks index exists
                Request getTasksIndex = new Request("GET", "/.tasks");
                getTasksIndex.addParameter("allow_no_indices", "false");
                if (getOldClusterVersion().onOrAfter(Version.V_6_7_0) && getOldClusterVersion().before(Version.V_7_0_0)) {
                    getTasksIndex.addParameter("include_type_name", "false");
                }
                assertThat(client().performRequest(getTasksIndex).getStatusLine().getStatusCode(), is(200));
            });
        }

    @SuppressWarnings("unchecked" +
        "")
    public void testSystemIndexGetsUpdatedMetadata() throws Exception {
        assumeFalse("only run in upgraded cluster", isRunningAgainstOldCluster());
            getTasksIndex.setOptions(expectVersionSpecificWarnings(v -> {
                v.current(systemIndexWarning);
                v.compatible(systemIndexWarning);
            }));
            assertBusy(() -> {
                try {
                    assertThat(client().performRequest(getTasksIndex).getStatusLine().getStatusCode(), is(200));
                } catch (ResponseException e) {
                    throw new AssertionError(".tasks index does not exist yet");
                }
            });

        assertBusy(() -> {
            Request clusterStateRequest = new Request("GET", "/_cluster/state/metadata");
            Map<String, Object> response = entityAsMap(client().performRequest(clusterStateRequest));
            Map<String, Object> metadata = (Map<String, Object>) response.get("metadata");
            assertNotNull(metadata);
            Map<String, Object> indices = (Map<String, Object>) metadata.get("indices");
            assertNotNull(indices);
            // If we are on 7.x create an alias that includes both a system index and a non-system index so we can be sure it gets
            // upgraded properly. If we're already on 8.x, skip this part of the test.
            if (minimumNodeVersion().before(SYSTEM_INDEX_ENFORCEMENT_VERSION)) {
                // Create an alias to make sure it gets upgraded properly
                Request putAliasRequest = new Request("POST", "/_aliases");
                putAliasRequest.setJsonEntity("{\n" +
                    "  \"actions\": [\n" +
                    "    {\"add\": {\"index\": \".tasks\", \"alias\": \"test-system-alias\"}},\n" +
                    "    {\"add\": {\"index\": \"test_index_reindex\", \"alias\": \"test-system-alias\"}}\n" +
                    "  ]\n" +
                    "}");
                assertThat(client().performRequest(putAliasRequest).getStatusLine().getStatusCode(), is(200));
            }
        } else {
            assertBusy(() -> {
                Request clusterStateRequest = new Request("GET", "/_cluster/state/metadata");
                Map<String, Object> indices = new XContentTestUtils.JsonMapView(entityAsMap(client().performRequest(clusterStateRequest)))
                    .get("metadata.indices");

                Map<String, Object> tasksIndex = (Map<String, Object>) indices.get(".tasks");
                assertNotNull(tasksIndex);
                assertThat(tasksIndex.get("system"), is(true));
                // Make sure our non-system index is still non-system
                assertThat(new XContentTestUtils.JsonMapView(indices).get("test_index_old.system"), is(false));

                Map<String, Object> testIndex = (Map<String, Object>) indices.get("test_index_old");
                assertNotNull(testIndex);
                assertThat(testIndex.get("system"), is(false));
            });
                // Can't get the .tasks index via JsonMapView because it splits on `.`
                assertThat(indices, hasKey(".tasks"));
                XContentTestUtils.JsonMapView tasksIndex = new XContentTestUtils.JsonMapView((Map<String, Object>) indices.get(".tasks"));
                assertThat(tasksIndex.get("system"), is(true));

                // If .tasks was created in a 7.x version, it should have an alias on it that we need to make sure got upgraded properly.
                final String tasksCreatedVersionString = tasksIndex.get("settings.index.version.created");
                assertThat(tasksCreatedVersionString, notNullValue());
                final Version tasksCreatedVersion = Version.fromId(Integer.parseInt(tasksCreatedVersionString));
                if (tasksCreatedVersion.before(SYSTEM_INDEX_ENFORCEMENT_VERSION)) {
                    // Verify that the alias survived the upgrade
                    Request getAliasRequest = new Request("GET", "/_alias/test-system-alias");
                    getAliasRequest.setOptions(expectVersionSpecificWarnings(v -> {
                        v.current(systemIndexWarning);
                        v.compatible(systemIndexWarning);
                    }));
                    Map<String, Object> aliasResponse = entityAsMap(client().performRequest(getAliasRequest));
                    assertThat(aliasResponse, hasKey(".tasks"));
                    assertThat(aliasResponse, hasKey("test_index_reindex"));
                }
            });
        }
    }

    public void testEnableSoftDeletesOnRestore() throws Exception {
@ -21,84 +21,121 @@ package org.elasticsearch.upgrades;

import org.elasticsearch.Version;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.test.XContentTestUtils.JsonMapView;

import java.util.Map;

import static org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.SYSTEM_INDEX_ENFORCEMENT_VERSION;
import static org.hamcrest.Matchers.hasKey;
import static org.hamcrest.Matchers.is;
import static org.hamcrest.Matchers.notNullValue;

public class SystemIndicesUpgradeIT extends AbstractRollingTestCase {

    public void testOldDoesntHaveSystemIndexMetadata() throws Exception {
        assumeTrue("only run in old cluster", CLUSTER_TYPE == ClusterType.OLD);
        // create index
        Request createTestIndex = new Request("PUT", "/test_index_old");
        createTestIndex.setJsonEntity("{\"settings\": {\"index.number_of_replicas\": 0, \"index.number_of_shards\": 1}}");
        client().performRequest(createTestIndex);
    @SuppressWarnings("unchecked")
    public void testSystemIndicesUpgrades() throws Exception {
        final String systemIndexWarning = "this request accesses system indices: [.tasks], but in a future major version, direct " +
            "access to system indices will be prevented by default";
        if (CLUSTER_TYPE == ClusterType.OLD) {
            // create index
            Request createTestIndex = new Request("PUT", "/test_index_old");
            createTestIndex.setJsonEntity("{\"settings\": {\"index.number_of_shards\": 1, \"index.number_of_replicas\": 0}}");
            client().performRequest(createTestIndex);

            Request bulk = new Request("POST", "/_bulk");
            bulk.addParameter("refresh", "true");
            if (UPGRADE_FROM_VERSION.before(Version.V_7_0_0)) {
                bulk.setJsonEntity("{\"index\": {\"_index\": \"test_index_old\", \"_type\" : \"_doc\"}}\n" +
                    "{\"f1\": \"v1\", \"f2\": \"v2\"}\n");
            } else {
                bulk.setJsonEntity("{\"index\": {\"_index\": \"test_index_old\"}\n" +
                    "{\"f1\": \"v1\", \"f2\": \"v2\"}\n");
            }
            client().performRequest(bulk);

            // start a async reindex job
            Request reindex = new Request("POST", "/_reindex");
            reindex.setJsonEntity(
                "{\n" +
                "  \"source\":{\n" +
                "    \"index\":\"test_index_old\"\n" +
                "  },\n" +
                "  \"dest\":{\n" +
                "    \"index\":\"test_index_reindex\"\n" +
                "  }\n" +
                "}");
            reindex.addParameter("wait_for_completion", "false");
            Map<String, Object> response = entityAsMap(client().performRequest(reindex));
            String taskId = (String) response.get("task");

            // wait for task
            Request getTask = new Request("GET", "/_tasks/" + taskId);
            getTask.addParameter("wait_for_completion", "true");
            client().performRequest(getTask);

            // make sure .tasks index exists
            assertBusy(() -> {
                // make sure .tasks index exists
                Request getTasksIndex = new Request("GET", "/.tasks");
                getTasksIndex.addParameter("allow_no_indices", "false");
                if (UPGRADE_FROM_VERSION.before(Version.V_7_0_0)) {
                    getTasksIndex.addParameter("include_type_name", "false");
                }
                assertThat(client().performRequest(getTasksIndex).getStatusLine().getStatusCode(), is(200));
            });
        }

    public void testMixedCluster() {
        assumeTrue("nothing to do in mixed cluster", CLUSTER_TYPE == ClusterType.MIXED);
    }
            getTasksIndex.setOptions(expectVersionSpecificWarnings(v -> {
                v.current(systemIndexWarning);
                v.compatible(systemIndexWarning);
            }));
            assertBusy(() -> {
                try {
                    assertThat(client().performRequest(getTasksIndex).getStatusLine().getStatusCode(), is(200));
                } catch (ResponseException e) {
                    throw new AssertionError(".tasks index does not exist yet");
                }
            });

    @SuppressWarnings("unchecked")
    public void testUpgradedCluster() throws Exception {
        assumeTrue("only run on upgraded cluster", CLUSTER_TYPE == ClusterType.UPGRADED);
            // If we are on 7.x create an alias that includes both a system index and a non-system index so we can be sure it gets
            // upgraded properly. If we're already on 8.x, skip this part of the test.
            if (minimumNodeVersion().before(SYSTEM_INDEX_ENFORCEMENT_VERSION)) {
                // Create an alias to make sure it gets upgraded properly
                Request putAliasRequest = new Request("POST", "/_aliases");
                putAliasRequest.setJsonEntity("{\n" +
                    "  \"actions\": [\n" +
                    "    {\"add\": {\"index\": \".tasks\", \"alias\": \"test-system-alias\"}},\n" +
                    "    {\"add\": {\"index\": \"test_index_reindex\", \"alias\": \"test-system-alias\"}}\n" +
                    "  ]\n" +
                    "}");
                assertThat(client().performRequest(putAliasRequest).getStatusLine().getStatusCode(), is(200));
            }
        } else if (CLUSTER_TYPE == ClusterType.UPGRADED) {
            assertBusy(() -> {
                Request clusterStateRequest = new Request("GET", "/_cluster/state/metadata");
                Map<String, Object> indices = new JsonMapView(entityAsMap(client().performRequest(clusterStateRequest)))
                    .get("metadata.indices");

                Request clusterStateRequest = new Request("GET", "/_cluster/state/metadata");
                Map<String, Object> response = entityAsMap(client().performRequest(clusterStateRequest));
                Map<String, Object> metadata = (Map<String, Object>) response.get("metadata");
                assertNotNull(metadata);
                Map<String, Object> indices = (Map<String, Object>) metadata.get("indices");
                assertNotNull(indices);
                // Make sure our non-system index is still non-system
                assertThat(new JsonMapView(indices).get("test_index_old.system"), is(false));

                Map<String, Object> tasksIndex = (Map<String, Object>) indices.get(".tasks");
                assertNotNull(tasksIndex);
                assertThat(tasksIndex.get("system"), is(true));
                // Can't get the .tasks index via JsonMapView because it splits on `.`
                assertThat(indices, hasKey(".tasks"));
                JsonMapView tasksIndex = new JsonMapView((Map<String, Object>) indices.get(".tasks"));
                assertThat(tasksIndex.get("system"), is(true));

                Map<String, Object> testIndex = (Map<String, Object>) indices.get("test_index_old");
                assertNotNull(testIndex);
                assertThat(testIndex.get("system"), is(false));
            });
                // If .tasks was created in a 7.x version, it should have an alias on it that we need to make sure got upgraded properly.
                final String tasksCreatedVersionString = tasksIndex.get("settings.index.version.created");
                assertThat(tasksCreatedVersionString, notNullValue());
                final Version tasksCreatedVersion = Version.fromId(Integer.parseInt(tasksCreatedVersionString));
                if (tasksCreatedVersion.before(SYSTEM_INDEX_ENFORCEMENT_VERSION)) {
                    // Verify that the alias survived the upgrade
                    Request getAliasRequest = new Request("GET", "/_alias/test-system-alias");
                    getAliasRequest.setOptions(expectVersionSpecificWarnings(v -> {
                        v.current(systemIndexWarning);
                        v.compatible(systemIndexWarning);
                    }));
                    Map<String, Object> aliasResponse = entityAsMap(client().performRequest(getAliasRequest));
                    assertThat(aliasResponse, hasKey(".tasks"));
                    assertThat(aliasResponse, hasKey("test_index_reindex"));
                }
            });
        }
    }
}
@ -87,9 +87,15 @@
---
"Find a task result record from the old cluster":
  - skip:
      features: headers
      features:
        - headers
        - allowed_warnings

  - do:
      # We don't require this warning because there's a very brief window during upgrades before the IndexMetaData is upgraded when warnings
      # may not be emitted. That they do get upgraded is tested specifically by FullClusterRestartIT#testSystemIndexMetadataIsUpgraded().
      allowed_warnings:
        - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default"
      search:
        rest_total_hits_as_int: true
        index: .tasks
@ -0,0 +1,173 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied.  See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.http;

import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.IndexScopedSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.settings.SettingsFilter;
import org.elasticsearch.indices.SystemIndexDescriptor;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.plugins.SystemIndexPlugin;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestHandler;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.action.RestStatusToXContentListener;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

import static org.elasticsearch.test.rest.ESRestTestCase.entityAsMap;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.hasKey;
import static org.hamcrest.Matchers.is;

public class SystemIndexRestIT extends HttpSmokeTestCase {

    @Override
    protected Collection<Class<? extends Plugin>> nodePlugins() {
        List<Class<? extends Plugin>> plugins = new ArrayList<>(super.nodePlugins());
        plugins.add(SystemIndexTestPlugin.class);
        return plugins;
    }

    public void testSystemIndexAccessBlockedByDefault() throws Exception {
        // create index
        {
            Request putDocRequest = new Request("POST", "/_sys_index_test/add_doc/42");
            Response resp = getRestClient().performRequest(putDocRequest);
            assertThat(resp.getStatusLine().getStatusCode(), equalTo(201));
        }

        // make sure the system index now exists
        assertBusy(() -> {
            Request searchRequest = new Request("GET", "/" + SystemIndexTestPlugin.SYSTEM_INDEX_NAME + "/_count");
            searchRequest.setOptions(expectWarnings("this request accesses system indices: [" + SystemIndexTestPlugin.SYSTEM_INDEX_NAME +
                "], but in a future major version, direct access to system indices will be prevented by default"));

            // Disallow no indices to cause an exception if the flag above doesn't work
            searchRequest.addParameter("allow_no_indices", "false");
            searchRequest.setJsonEntity("{\"query\": {\"match\": {\"some_field\": \"some_value\"}}}");

            final Response searchResponse = getRestClient().performRequest(searchRequest);
            assertThat(searchResponse.getStatusLine().getStatusCode(), is(200));
            Map<String, Object> responseMap = entityAsMap(searchResponse);
            assertThat(responseMap, hasKey("count"));
            assertThat(responseMap.get("count"), equalTo(1));
        });

        // And with a partial wildcard
        assertDeprecationWarningOnAccess(".test-*", SystemIndexTestPlugin.SYSTEM_INDEX_NAME);

        // And with a total wildcard
        assertDeprecationWarningOnAccess(randomFrom("*", "_all"), SystemIndexTestPlugin.SYSTEM_INDEX_NAME);

        // Try to index a doc directly
        {
            String expectedWarning = "this request accesses system indices: [" + SystemIndexTestPlugin.SYSTEM_INDEX_NAME + "], but in a " +
                "future major version, direct access to system indices will be prevented by default";
            Request putDocDirectlyRequest = new Request("PUT", "/" + SystemIndexTestPlugin.SYSTEM_INDEX_NAME + "/_doc/43");
            putDocDirectlyRequest.setJsonEntity("{\"some_field\": \"some_other_value\"}");
            putDocDirectlyRequest.setOptions(expectWarnings(expectedWarning));
            Response response = getRestClient().performRequest(putDocDirectlyRequest);
            assertThat(response.getStatusLine().getStatusCode(), equalTo(201));
        }
    }

    private void assertDeprecationWarningOnAccess(String queryPattern, String warningIndexName) throws IOException {
        String expectedWarning = "this request accesses system indices: [" + warningIndexName + "], but in a " +
            "future major version, direct access to system indices will be prevented by default";
        Request searchRequest = new Request("GET", "/" + queryPattern + randomFrom("/_count", "/_search"));
        searchRequest.setJsonEntity("{\"query\": {\"match\": {\"some_field\": \"some_value\"}}}");
        // Disallow no indices to cause an exception if this resolves to zero indices, so that we're sure it resolved the index
        searchRequest.addParameter("allow_no_indices", "false");
        searchRequest.setOptions(expectWarnings(expectedWarning));

        Response response = getRestClient().performRequest(searchRequest);
        assertThat(response.getStatusLine().getStatusCode(), equalTo(200));
    }

    private RequestOptions expectWarnings(String expectedWarning) {
        final RequestOptions.Builder builder = RequestOptions.DEFAULT.toBuilder();
        builder.setWarningsHandler(w -> w.contains(expectedWarning) == false || w.size() != 1);
        return builder.build();
    }

    public static class SystemIndexTestPlugin extends Plugin implements SystemIndexPlugin {

        public static final String SYSTEM_INDEX_NAME = ".test-system-idx";

        @Override
        public List<RestHandler> getRestHandlers(Settings settings, RestController restController, ClusterSettings clusterSettings,
                                                 IndexScopedSettings indexScopedSettings, SettingsFilter settingsFilter,
                                                 IndexNameExpressionResolver indexNameExpressionResolver,
                                                 Supplier<DiscoveryNodes> nodesInCluster) {
            return org.elasticsearch.common.collect.List.of(new AddDocRestHandler());
        }

        @Override
        public Collection<SystemIndexDescriptor> getSystemIndexDescriptors(Settings settings) {
            return Collections.singletonList(new SystemIndexDescriptor(SYSTEM_INDEX_NAME, "System indices for tests"));
        }

        public static class AddDocRestHandler extends BaseRestHandler {
            @Override
            public boolean allowSystemIndexAccessByDefault() {
                return true;
            }

            @Override
            public String getName() {
                return "system_index_test_doc_adder";
            }

            @Override
            public List<Route> routes() {
                return org.elasticsearch.common.collect.List.of(new Route(RestRequest.Method.POST, "/_sys_index_test/add_doc/{id}"));
            }

            @Override
            protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException {
                IndexRequest indexRequest = new IndexRequest(SYSTEM_INDEX_NAME);
                indexRequest.id(request.param("id"));
                indexRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
                indexRequest.source(org.elasticsearch.common.collect.Map.of("some_field", "some_value"));
                return channel -> client.index(indexRequest,
                    new RestStatusToXContentListener<>(channel, r -> r.getLocation(indexRequest.routing())));
            }
        }
    }
}
@ -392,7 +392,7 @@ public class IndicesRequestIT extends ESIntegTestCase {
        internalCluster().coordOnlyNodeClient().admin().indices().flush(flushRequest).actionGet();

        clearInterceptedActions();
        String[] indices = new IndexNameExpressionResolver()
        String[] indices = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))
            .concreteIndexNames(client().admin().cluster().prepareState().get().getState(), flushRequest);
        assertIndicesSubset(Arrays.asList(indices), indexShardActions);
    }

@ -417,7 +417,7 @@ public class IndicesRequestIT extends ESIntegTestCase {
        internalCluster().coordOnlyNodeClient().admin().indices().refresh(refreshRequest).actionGet();

        clearInterceptedActions();
        String[] indices = new IndexNameExpressionResolver()
        String[] indices = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))
            .concreteIndexNames(client().admin().cluster().prepareState().get().getState(), refreshRequest);
        assertIndicesSubset(Arrays.asList(indices), indexShardActions);
    }
@ -56,6 +56,7 @@ import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.settings.SettingsFilter;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentParser;

@ -821,7 +822,7 @@ public class DedicatedClusterSnapshotRestoreIT extends AbstractSnapshotIntegTest
    public void testSnapshotWithDateMath() {
        final String repo = "repo";

        final IndexNameExpressionResolver nameExpressionResolver = new IndexNameExpressionResolver();
        final IndexNameExpressionResolver nameExpressionResolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY));
        final String snapshotName = "<snapshot-{now/d}>";

        logger.info("--> creating repository");
@ -25,26 +25,38 @@ import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.AliasMetadata;
import org.elasticsearch.cluster.metadata.IndexMetadata;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.collect.ImmutableOpenMap;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.logging.DeprecationLogger;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.indices.SystemIndices;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.stream.Collectors;

public class TransportGetAliasesAction extends TransportMasterNodeReadAction<GetAliasesRequest, GetAliasesResponse> {
    private static final DeprecationLogger deprecationLogger = DeprecationLogger.getLogger(TransportGetAliasesAction.class);

    private final SystemIndices systemIndices;

    @Inject
    public TransportGetAliasesAction(TransportService transportService, ClusterService clusterService,
                                     ThreadPool threadPool, ActionFilters actionFilters,
                                     IndexNameExpressionResolver indexNameExpressionResolver) {
                                     IndexNameExpressionResolver indexNameExpressionResolver, SystemIndices systemIndices) {
        super(GetAliasesAction.NAME, transportService, clusterService, threadPool, actionFilters, GetAliasesRequest::new,
            indexNameExpressionResolver);
        this.systemIndices = systemIndices;
    }

    @Override

@ -55,8 +67,9 @@ public class TransportGetAliasesAction extends TransportMasterNodeReadAction<Get

    @Override
    protected ClusterBlockException checkBlock(GetAliasesRequest request, ClusterState state) {
        // Resolve with system index access since we're just checking blocks
        return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ,
            indexNameExpressionResolver.concreteIndexNames(state, request));
            indexNameExpressionResolver.concreteIndexNamesWithSystemIndexAccess(state, request));
    }

    @Override

@ -66,16 +79,25 @@ public class TransportGetAliasesAction extends TransportMasterNodeReadAction<Get

    @Override
    protected void masterOperation(GetAliasesRequest request, ClusterState state, ActionListener<GetAliasesResponse> listener) {
        String[] concreteIndices = indexNameExpressionResolver.concreteIndexNames(state, request);
        String[] concreteIndices;
        // Switch to a context which will drop any deprecation warnings, because there may be indices resolved here which are not
        // returned in the final response. We'll add warnings back later if necessary in checkSystemIndexAccess.
        try (ThreadContext.StoredContext ignore = threadPool.getThreadContext().newStoredContext(false)) {
            concreteIndices = indexNameExpressionResolver.concreteIndexNames(state, request);
        }
        final boolean systemIndexAccessAllowed = indexNameExpressionResolver.isSystemIndexAccessAllowed();
        ImmutableOpenMap<String, List<AliasMetadata>> aliases = state.metadata().findAliases(request, concreteIndices);
        listener.onResponse(new GetAliasesResponse(postProcess(request, concreteIndices, aliases)));
        listener.onResponse(new GetAliasesResponse(postProcess(request, concreteIndices, aliases, state,
            systemIndexAccessAllowed, systemIndices)));
    }

    /**
     * Fills alias result with empty entries for requested indices when no specific aliases were requested.
     */
    static ImmutableOpenMap<String, List<AliasMetadata>> postProcess(GetAliasesRequest request, String[] concreteIndices,
                                                                     ImmutableOpenMap<String, List<AliasMetadata>> aliases) {
                                                                     ImmutableOpenMap<String, List<AliasMetadata>> aliases,
                                                                     ClusterState state, boolean systemIndexAccessAllowed,
                                                                     SystemIndices systemIndices) {
        boolean noAliasesSpecified = request.getOriginalAliases() == null || request.getOriginalAliases().length == 0;
        ImmutableOpenMap.Builder<String, List<AliasMetadata>> mapBuilder = ImmutableOpenMap.builder(aliases);
        for (String index : concreteIndices) {

@ -84,7 +106,40 @@ public class TransportGetAliasesAction extends TransportMasterNodeReadAction<Get
                assert previous == null;
            }
        }
        return mapBuilder.build();
        final ImmutableOpenMap<String, List<AliasMetadata>> finalResponse = mapBuilder.build();
        if (systemIndexAccessAllowed == false) {
            checkSystemIndexAccess(request, systemIndices, state, finalResponse);
        }
        return finalResponse;
    }

    private static void checkSystemIndexAccess(GetAliasesRequest request, SystemIndices systemIndices, ClusterState state,
                                               ImmutableOpenMap<String, List<AliasMetadata>> aliasesMap) {
        List<String> systemIndicesNames = new ArrayList<>();
        for (Iterator<String> it = aliasesMap.keysIt(); it.hasNext(); ) {
            String indexName = it.next();
            IndexMetadata index = state.metadata().index(indexName);
            if (index != null && index.isSystem()) {
                systemIndicesNames.add(indexName);
            }
        }
        if (systemIndicesNames.isEmpty() == false) {
            deprecationLogger.deprecate("open_system_index_access",
                "this request accesses system indices: {}, but in a future major version, direct access to system " +
                    "indices will be prevented by default", systemIndicesNames);
        } else {
            checkSystemAliasAccess(request, systemIndices);
        }
    }

    private static void checkSystemAliasAccess(GetAliasesRequest request, SystemIndices systemIndices) {
        final List<String> systemAliases = Arrays.stream(request.aliases())
            .filter(alias -> systemIndices.isSystemIndex(alias))
            .collect(Collectors.toList());
        if (systemAliases.isEmpty() == false) {
            deprecationLogger.deprecate("open_system_alias_access",
                "this request accesses aliases with names reserved for system indices: {}, but in a future major version, direct" +
                    "access to system indices and their aliases will not be allowed", systemAliases);
        }
    }
}
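For orientation, a small hypothetical client-side sketch (not part of this change) of where the `DeprecationLogger` output above surfaces: it is returned to REST callers as `Warning` response headers, which the low-level REST client exposes via `Response#getWarnings()`; the alias name below is the one used by the tests in this diff:

```java
import java.util.List;

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.WarningsHandler;

public class LogSystemIndexWarnings {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("GET", "/_alias/test-system-alias");
            // Don't fail the request because of warnings; we only want to inspect them.
            request.setOptions(RequestOptions.DEFAULT.toBuilder().setWarningsHandler(WarningsHandler.PERMISSIVE));
            Response response = client.performRequest(request);
            List<String> warnings = response.getWarnings();
            // e.g. "this request accesses system indices: [.tasks], but in a future major version, ..."
            warnings.forEach(System.err::println);
        }
    }
}
```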
@ -220,6 +220,17 @@ public abstract class TransportBroadcastByNodeAction<Request extends BroadcastRe
     */
    protected abstract ClusterBlockException checkRequestBlock(ClusterState state, Request request, String[] concreteIndices);

    /**
     * Resolves a list of concrete index names. Override this if index names should be resolved differently than normal.
     *
     * @param clusterState the cluster state
     * @param request the underlying request
     * @return a list of concrete index names that this action should operate on
     */
    protected String[] resolveConcreteIndexNames(ClusterState clusterState, Request request) {
        return indexNameExpressionResolver.concreteIndexNames(clusterState, request);
    }

    @Override
    protected void doExecute(Task task, Request request, ActionListener<Response> listener) {
        new AsyncAction(task, request, listener).start();

@ -249,7 +260,7 @@ public abstract class TransportBroadcastByNodeAction<Request extends BroadcastRe
                throw globalBlockException;
            }

            String[] concreteIndices = indexNameExpressionResolver.concreteIndexNames(clusterState, request);
            String[] concreteIndices = resolveConcreteIndexNames(clusterState, request);
            ClusterBlockException requestBlockException = checkRequestBlock(clusterState, request, concreteIndices);
            if (requestBlockException != null) {
                throw requestBlockException;
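A hypothetical illustration (not code from this commit) of the two resolution paths the new `resolveConcreteIndexNames()` hook chooses between: purpose-specific actions such as the stats, segments, and shard-stores style APIs listed in the PR description can keep resolving system indices without emitting the warning, while everything else takes the normal path. The helper class, `state`, and `request` stand in for real inputs:

```java
import org.elasticsearch.action.IndicesRequest;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;

public class ResolutionModes {
    static String[] resolve(ClusterState state, IndicesRequest request, boolean keepSystemIndexAccess) {
        // Test-style construction, mirroring new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))
        // used elsewhere in this diff.
        IndexNameExpressionResolver resolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY));
        return keepSystemIndexAccess
            // for actions that still need system indices without a deprecation warning
            ? resolver.concreteIndexNamesWithSystemIndexAccess(state, request)
            // normal resolution, subject to the new system index access check
            : resolver.concreteIndexNames(state, request);
    }
}
```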
@ -23,10 +23,10 @@ import org.elasticsearch.cluster.action.index.MappingUpdatedAction;
import org.elasticsearch.cluster.action.index.NodeMappingRefreshAction;
import org.elasticsearch.cluster.action.shard.ShardStateAction;
import org.elasticsearch.cluster.metadata.ComponentTemplateMetadata;
import org.elasticsearch.cluster.metadata.ComposableIndexTemplateMetadata;
import org.elasticsearch.cluster.metadata.DataStreamMetadata;
import org.elasticsearch.cluster.metadata.IndexGraveyard;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.ComposableIndexTemplateMetadata;
import org.elasticsearch.cluster.metadata.Metadata;
import org.elasticsearch.cluster.metadata.MetadataDeleteIndexService;
import org.elasticsearch.cluster.metadata.MetadataIndexAliasesService;

@ -69,6 +69,7 @@ import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.gateway.GatewayAllocator;
import org.elasticsearch.ingest.IngestMetadata;

@ -111,13 +112,13 @@ public class ClusterModule extends AbstractModule {
    final ShardsAllocator shardsAllocator;

    public ClusterModule(Settings settings, ClusterService clusterService, List<ClusterPlugin> clusterPlugins,
                         ClusterInfoService clusterInfoService, SnapshotsInfoService snapshotsInfoService) {
                         ClusterInfoService clusterInfoService, SnapshotsInfoService snapshotsInfoService, ThreadContext threadContext) {
        this.clusterPlugins = clusterPlugins;
        this.deciderList = createAllocationDeciders(settings, clusterService.getClusterSettings(), clusterPlugins);
        this.allocationDeciders = new AllocationDeciders(deciderList);
        this.shardsAllocator = createShardsAllocator(settings, clusterService.getClusterSettings(), clusterPlugins);
        this.clusterService = clusterService;
        this.indexNameExpressionResolver = new IndexNameExpressionResolver();
        this.indexNameExpressionResolver = new IndexNameExpressionResolver(threadContext);
        this.allocationService = new AllocationService(allocationDeciders, shardsAllocator, clusterInfoService, snapshotsInfoService);
    }
@ -81,7 +81,7 @@ public interface IndexAbstraction {
    boolean isHidden();

    /**
     * @return whether this index abstraction is hidden or not
     * @return whether this index abstraction should be treated as a system index or not
     */
    boolean isSystem();
@ -20,18 +20,22 @@
package org.elasticsearch.cluster.metadata;

import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.Version;
import org.elasticsearch.action.IndicesRequest;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.common.Booleans;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.collect.ImmutableOpenMap;
import org.elasticsearch.common.collect.Tuple;
import org.elasticsearch.common.logging.DeprecationLogger;
import org.elasticsearch.common.regex.Regex;
import org.elasticsearch.common.time.DateFormatter;
import org.elasticsearch.common.time.DateMathParser;
import org.elasticsearch.common.time.DateUtils;
import org.elasticsearch.common.util.CollectionUtils;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.IndexNotFoundException;

@ -59,20 +63,38 @@ import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class IndexNameExpressionResolver {
    private static final DeprecationLogger deprecationLogger = DeprecationLogger.getLogger(IndexNameExpressionResolver.class);

    public static final String EXCLUDED_DATA_STREAMS_KEY = "es.excluded_ds";
    public static final String SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY = "_system_index_access_allowed";
    public static final Version SYSTEM_INDEX_ENFORCEMENT_VERSION = Version.V_7_10_0;

    private final DateMathExpressionResolver dateMathExpressionResolver = new DateMathExpressionResolver();
    private final WildcardExpressionResolver wildcardExpressionResolver = new WildcardExpressionResolver();
    private final List<ExpressionResolver> expressionResolvers =
        org.elasticsearch.common.collect.List.of(dateMathExpressionResolver, wildcardExpressionResolver);

    private final ThreadContext threadContext;

    public IndexNameExpressionResolver(ThreadContext threadContext) {
        this.threadContext = Objects.requireNonNull(threadContext, "Thread Context must not be null");
    }

    /**
     * Same as {@link #concreteIndexNames(ClusterState, IndicesOptions, String...)}, but the index expressions and options
     * are encapsulated in the specified request.
     */
    public String[] concreteIndexNames(ClusterState state, IndicesRequest request) {
        Context context = new Context(state, request.indicesOptions(), false, false, request.includeDataStreams());
        Context context = new Context(state, request.indicesOptions(), false, false, request.includeDataStreams(),
            isSystemIndexAccessAllowed());
        return concreteIndexNames(context, request.indices());
    }

    /**
     * Same as {@link #concreteIndexNames(ClusterState, IndicesRequest)}, but access to system indices is always allowed.
     */
    public String[] concreteIndexNamesWithSystemIndexAccess(ClusterState state, IndicesRequest request) {
        Context context = new Context(state, request.indicesOptions(), false, false, request.includeDataStreams(), true);
        return concreteIndexNames(context, request.indices());
    }

@ -81,7 +103,8 @@ public class IndexNameExpressionResolver {
     * are encapsulated in the specified request and resolves data streams.
     */
    public Index[] concreteIndices(ClusterState state, IndicesRequest request) {
        Context context = new Context(state, request.indicesOptions(), false, false, request.includeDataStreams());
        Context context = new Context(state, request.indicesOptions(), false, false, request.includeDataStreams(),
            isSystemIndexAccessAllowed());
        return concreteIndices(context, request.indices());
    }

@ -99,22 +122,23 @@ public class IndexNameExpressionResolver {
     * indices options in the context don't allow such a case.
     */
    public String[] concreteIndexNames(ClusterState state, IndicesOptions options, String... indexExpressions) {
        Context context = new Context(state, options);
        Context context = new Context(state, options, isSystemIndexAccessAllowed());
        return concreteIndexNames(context, indexExpressions);
    }

    public String[] concreteIndexNames(ClusterState state, IndicesOptions options, boolean includeDataStreams, String... indexExpressions) {
        Context context = new Context(state, options, false, false, includeDataStreams);
        Context context = new Context(state, options, false, false, includeDataStreams, isSystemIndexAccessAllowed());
        return concreteIndexNames(context, indexExpressions);
    }

    public String[] concreteIndexNames(ClusterState state, IndicesOptions options, IndicesRequest request) {
        Context context = new Context(state, options, false, false, request.includeDataStreams());
        Context context = new Context(state, options, false, false, request.includeDataStreams(), isSystemIndexAccessAllowed());
        return concreteIndexNames(context, request.indices());
    }

    public List<String> dataStreamNames(ClusterState state, IndicesOptions options, String... indexExpressions) {
        Context context = new Context(state, options, false, false, true, true);
        // Allow system index access - they'll be filtered out below as there's no such thing (yet) as system data streams
|
||||
Context context = new Context(state, options, false, false, true, true, true);
|
||||
if (indexExpressions == null || indexExpressions.length == 0) {
|
||||
indexExpressions = new String[]{"*"};
|
||||
}
|
||||
|
@ -146,7 +170,8 @@ public class IndexNameExpressionResolver {
|
|||
}
|
||||
|
||||
public Index[] concreteIndices(ClusterState state, IndicesOptions options, boolean includeDataStreams, String... indexExpressions) {
|
||||
Context context = new Context(state, options, false, false, includeDataStreams);
|
||||
Context context = new Context(state, options, false, false, includeDataStreams,
|
||||
isSystemIndexAccessAllowed());
|
||||
return concreteIndices(context, indexExpressions);
|
||||
}
|
||||
|
||||
|
@ -163,7 +188,8 @@ public class IndexNameExpressionResolver {
|
|||
* indices options in the context don't allow such a case.
|
||||
*/
|
||||
public Index[] concreteIndices(ClusterState state, IndicesRequest request, long startTime) {
|
||||
Context context = new Context(state, request.indicesOptions(), startTime, false, false, request.includeDataStreams(), false);
|
||||
Context context = new Context(state, request.indicesOptions(), startTime, false, false, request.includeDataStreams(), false,
|
||||
isSystemIndexAccessAllowed());
|
||||
return concreteIndices(context, request.indices());
|
||||
}
|
||||
|
||||
|
@ -283,9 +309,26 @@ public class IndexNameExpressionResolver {
|
|||
}
|
||||
throw infe;
|
||||
}
|
||||
checkSystemIndexAccess(context, metadata, concreteIndices, indexExpressions);
|
||||
return concreteIndices.toArray(new Index[concreteIndices.size()]);
|
||||
}
|
||||
|
||||
private void checkSystemIndexAccess(Context context, Metadata metadata, Set<Index> concreteIndices, String[] originalPatterns) {
|
||||
if (context.isSystemIndexAccessAllowed() == false) {
|
||||
final List<String> resolvedSystemIndices = concreteIndices.stream()
|
||||
.map(metadata::index)
|
||||
.filter(IndexMetadata::isSystem)
|
||||
.map(i -> i.getIndex().getName())
|
||||
.sorted() // reliable order for testing
|
||||
.collect(Collectors.toList());
|
||||
if (resolvedSystemIndices.isEmpty() == false) {
|
||||
deprecationLogger.deprecate("open_system_index_access",
|
||||
"this request accesses system indices: {}, but in a future major version, direct access to system " +
|
||||
"indices will be prevented by default", resolvedSystemIndices);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private static boolean shouldTrackConcreteIndex(Context context, IndicesOptions options, IndexMetadata index) {
|
||||
if (index.getState() == IndexMetadata.State.CLOSE) {
|
||||
if (options.forbidClosedIndices() && options.ignoreUnavailable() == false) {
|
||||
|
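The new `checkSystemIndexAccess` hook above is where the warning quoted in the PR description is produced: when the resolution context does not allow system index access, any resolved system indices are collected in sorted order and handed to the deprecation logger. A minimal, standalone sketch of that filtering and message assembly (plain Java collections stand in for `Metadata` and `DeprecationLogger`, and a leading-dot check stands in for `IndexMetadata#isSystem`):

```
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class SystemIndexAccessCheckSketch {
    public static void main(String[] args) {
        // Hypothetical resolved indices for a request; ".tasks" stands in for a registered system index.
        List<String> concreteIndices = Arrays.asList("my-index-000001", ".tasks", "logs-2020.10");
        Predicate<String> isSystem = name -> name.startsWith("."); // simplification, not the real system-index check

        boolean systemIndexAccessAllowed = false; // would come from the resolver context
        if (systemIndexAccessAllowed == false) {
            List<String> resolvedSystemIndices = concreteIndices.stream()
                .filter(isSystem)
                .sorted() // reliable order, as in the production code
                .collect(Collectors.toList());
            if (resolvedSystemIndices.isEmpty() == false) {
                // The production code hands this message off to DeprecationLogger#deprecate instead of printing it.
                System.out.println("this request accesses system indices: " + resolvedSystemIndices
                    + ", but in a future major version, direct access to system indices will be prevented by default");
            }
        }
    }
}
```

Sorting before reporting keeps the warning text deterministic, which is what the `// reliable order for testing` comment in the real code is about.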
@@ -370,7 +413,7 @@ public class IndexNameExpressionResolver {
 options.allowAliasesToMultipleIndices(), options.forbidClosedIndices(), options.ignoreAliases(),
 options.ignoreThrottled());

-Context context = new Context(state, combinedOptions, false, true, includeDataStreams);
+Context context = new Context(state, combinedOptions, false, true, includeDataStreams, isSystemIndexAccessAllowed());
 Index[] indices = concreteIndices(context, index);
 if (allowNoIndices && indices.length == 0) {
 return null;

@@ -387,7 +430,7 @@ public class IndexNameExpressionResolver {
 * If the data stream, index or alias contains date math then that is resolved too.
 */
 public boolean hasIndexAbstraction(String indexAbstraction, ClusterState state) {
-Context context = new Context(state, IndicesOptions.lenientExpandOpen(), false, false, true);
+Context context = new Context(state, IndicesOptions.lenientExpandOpen(), false, false, true, isSystemIndexAccessAllowed());
 String resolvedAliasOrIndex = dateMathExpressionResolver.resolveExpression(indexAbstraction, context);
 return state.metadata().getIndicesLookup().containsKey(resolvedAliasOrIndex);
 }

@@ -398,14 +441,14 @@ public class IndexNameExpressionResolver {
 public String resolveDateMathExpression(String dateExpression) {
 // The data math expression resolver doesn't rely on cluster state or indices options, because
 // it just resolves the date math to an actual date.
-return dateMathExpressionResolver.resolveExpression(dateExpression, new Context(null, null));
+return dateMathExpressionResolver.resolveExpression(dateExpression, new Context(null, null, isSystemIndexAccessAllowed()));
 }

 /**
 * Resolve an array of expressions to the set of indices and aliases that these expressions match.
 */
 public Set<String> resolveExpressions(ClusterState state, String... expressions) {
-Context context = new Context(state, IndicesOptions.lenientExpandOpen(), true, false, true);
+Context context = new Context(state, IndicesOptions.lenientExpandOpen(), true, false, true, isSystemIndexAccessAllowed());
 List<String> resolvedExpressions = Arrays.asList(expressions);
 for (ExpressionResolver expressionResolver : expressionResolvers) {
 resolvedExpressions = expressionResolver.resolve(context, resolvedExpressions);

@@ -499,7 +542,7 @@ public class IndexNameExpressionResolver {
 */
 public Map<String, Set<String>> resolveSearchRouting(ClusterState state, @Nullable String routing, String... expressions) {
 List<String> resolvedExpressions = expressions != null ? Arrays.asList(expressions) : Collections.emptyList();
-Context context = new Context(state, IndicesOptions.lenientExpandOpen(), false, false, true);
+Context context = new Context(state, IndicesOptions.lenientExpandOpen(), false, false, true, isSystemIndexAccessAllowed());
 for (ExpressionResolver expressionResolver : expressionResolvers) {
 resolvedExpressions = expressionResolver.resolve(context, resolvedExpressions);
 }

@@ -651,6 +694,15 @@ public class IndexNameExpressionResolver {
 return false;
 }

+/**
+ * Determines whether or not system index access should be allowed in the current context.
+ *
+ * @return True if system index access should be allowed, false otherwise.
+ */
+public boolean isSystemIndexAccessAllowed() {
+return Booleans.parseBoolean(threadContext.getHeader(SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY), true);
+}

 public static class Context {

 private final ClusterState state;
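`isSystemIndexAccessAllowed()` defaults to `true` when the `_system_index_access_allowed` header is absent, so only requests the REST layer explicitly flags lose system index access. A small sketch of that default-true behaviour, with a plain map standing in for `ThreadContext` and `Boolean.parseBoolean` standing in for the stricter `Booleans.parseBoolean`:

```
import java.util.HashMap;
import java.util.Map;

public class HeaderGateSketch {
    // Mirrors Booleans.parseBoolean(header, true): an absent header means access is allowed.
    static boolean isSystemIndexAccessAllowed(Map<String, String> threadContextHeaders) {
        String header = threadContextHeaders.get("_system_index_access_allowed");
        return header == null ? true : Boolean.parseBoolean(header);
    }

    public static void main(String[] args) {
        Map<String, String> headers = new HashMap<>();
        System.out.println(isSystemIndexAccessAllowed(headers)); // true: nothing set, access allowed

        headers.put("_system_index_access_allowed", "false");
        System.out.println(isSystemIndexAccessAllowed(headers)); // false: the REST layer flagged this request
    }
}
```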
@ -660,27 +712,30 @@ public class IndexNameExpressionResolver {
|
|||
private final boolean resolveToWriteIndex;
|
||||
private final boolean includeDataStreams;
|
||||
private final boolean preserveDataStreams;
|
||||
private final boolean isSystemIndexAccessAllowed;
|
||||
|
||||
Context(ClusterState state, IndicesOptions options) {
|
||||
this(state, options, System.currentTimeMillis());
|
||||
Context(ClusterState state, IndicesOptions options, boolean isSystemIndexAccessAllowed) {
|
||||
this(state, options, System.currentTimeMillis(), isSystemIndexAccessAllowed);
|
||||
}
|
||||
|
||||
Context(ClusterState state, IndicesOptions options, boolean preserveAliases, boolean resolveToWriteIndex,
|
||||
boolean includeDataStreams) {
|
||||
this(state, options, System.currentTimeMillis(), preserveAliases, resolveToWriteIndex, includeDataStreams, false);
|
||||
boolean includeDataStreams, boolean isSystemIndexAccessAllowed) {
|
||||
this(state, options, System.currentTimeMillis(), preserveAliases, resolveToWriteIndex, includeDataStreams, false,
|
||||
isSystemIndexAccessAllowed);
|
||||
}
|
||||
|
||||
Context(ClusterState state, IndicesOptions options, boolean preserveAliases, boolean resolveToWriteIndex,
|
||||
boolean includeDataStreams, boolean preserveDataStreams) {
|
||||
this(state, options, System.currentTimeMillis(), preserveAliases, resolveToWriteIndex, includeDataStreams, preserveDataStreams);
|
||||
boolean includeDataStreams, boolean preserveDataStreams, boolean isSystemIndexAccessAllowed) {
|
||||
this(state, options, System.currentTimeMillis(), preserveAliases, resolveToWriteIndex, includeDataStreams, preserveDataStreams,
|
||||
isSystemIndexAccessAllowed);
|
||||
}
|
||||
|
||||
Context(ClusterState state, IndicesOptions options, long startTime) {
|
||||
this(state, options, startTime, false, false, false, false);
|
||||
Context(ClusterState state, IndicesOptions options, long startTime, boolean isSystemIndexAccessAllowed) {
|
||||
this(state, options, startTime, false, false, false, false, isSystemIndexAccessAllowed);
|
||||
}
|
||||
|
||||
protected Context(ClusterState state, IndicesOptions options, long startTime, boolean preserveAliases, boolean resolveToWriteIndex,
|
||||
boolean includeDataStreams, boolean preserveDataStreams) {
|
||||
boolean includeDataStreams, boolean preserveDataStreams, boolean isSystemIndexAccessAllowed) {
|
||||
this.state = state;
|
||||
this.options = options;
|
||||
this.startTime = startTime;
|
||||
|
@ -688,6 +743,7 @@ public class IndexNameExpressionResolver {
|
|||
this.resolveToWriteIndex = resolveToWriteIndex;
|
||||
this.includeDataStreams = includeDataStreams;
|
||||
this.preserveDataStreams = preserveDataStreams;
|
||||
this.isSystemIndexAccessAllowed = isSystemIndexAccessAllowed;
|
||||
}
|
||||
|
||||
public ClusterState getState() {
|
||||
|
@ -726,6 +782,13 @@ public class IndexNameExpressionResolver {
|
|||
public boolean isPreserveDataStreams() {
|
||||
return preserveDataStreams;
|
||||
}
|
||||
|
||||
/**
|
||||
* Used to determine if it is allowed to access system indices in this context (e.g. for this request).
|
||||
*/
|
||||
public boolean isSystemIndexAccessAllowed() {
|
||||
return isSystemIndexAccessAllowed;
|
||||
}
|
||||
}
|
||||
|
||||
private interface ExpressionResolver {
|
||||
|
|
|
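With the constructor change, every caller now has to supply a `ThreadContext`; production code passes `threadPool.getThreadContext()` (see the `Node` hunk below) and tests pass `new ThreadContext(Settings.EMPTY)`. A hedged construction sketch using the classes touched by this change (the empty cluster state and wildcard expression are illustrative only):

```
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;

public class ResolverConstructionSketch {
    public static void main(String[] args) {
        // The thread context travels with the resolver so it can consult the
        // _system_index_access_allowed header set by the REST layer.
        ThreadContext threadContext = new ThreadContext(Settings.EMPTY);
        IndexNameExpressionResolver resolver = new IndexNameExpressionResolver(threadContext);

        // With no header set, system index access is allowed and no warning is emitted.
        String[] names = resolver.concreteIndexNames(ClusterState.EMPTY_STATE, IndicesOptions.lenientExpandOpen(), "*");
        System.out.println(names.length); // 0 for an empty cluster state
    }
}
```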
@@ -50,7 +50,6 @@ import static org.elasticsearch.tasks.TaskResultsService.TASK_INDEX;
 * to reduce the locations within the code that need to deal with {@link SystemIndexDescriptor}s.
 */
 public class SystemIndices {

 private static final Map<String, Collection<SystemIndexDescriptor>> SERVER_SYSTEM_INDEX_DESCRIPTORS = singletonMap(
 TaskResultsService.class.getName(), singletonList(new SystemIndexDescriptor(TASK_INDEX + "*", "Task Result Index"))
 );

@@ -425,7 +425,7 @@ public class Node implements Closeable {
 final InternalSnapshotsInfoService snapshotsInfoService = new InternalSnapshotsInfoService(settings, clusterService,
 repositoriesServiceReference::get, rerouteServiceReference::get);
 final ClusterModule clusterModule = new ClusterModule(settings, clusterService, clusterPlugins, clusterInfoService,
-snapshotsInfoService);
+snapshotsInfoService, threadPool.getThreadContext());
 modules.add(clusterModule);
 IndicesModule indicesModule = new IndicesModule(pluginsService.filterPlugins(MapperPlugin.class));
 modules.add(indicesModule);

@@ -54,6 +54,7 @@ import java.util.function.Supplier;
 import java.util.function.UnaryOperator;
 import java.util.stream.Collectors;

+import static org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY;
 import static org.elasticsearch.rest.BytesRestResponse.TEXT_CONTENT_TYPE;
 import static org.elasticsearch.rest.RestStatus.BAD_REQUEST;
 import static org.elasticsearch.rest.RestStatus.INTERNAL_SERVER_ERROR;

@@ -65,6 +66,7 @@ public class RestController implements HttpServerTransport.Dispatcher {

 private static final Logger logger = LogManager.getLogger(RestController.class);
 private static final DeprecationLogger deprecationLogger = DeprecationLogger.getLogger(RestController.class);
+private static final String ELASTIC_PRODUCT_ORIGIN_HTTP_HEADER = "X-elastic-product-origin";

 private static final BytesReference FAVICON_RESPONSE;

@@ -246,6 +248,13 @@ public class RestController implements HttpServerTransport.Dispatcher {
 if (handler.allowsUnsafeBuffers() == false) {
 request.ensureSafeBuffers();
 }
+if (handler.allowSystemIndexAccessByDefault() == false && request.header(ELASTIC_PRODUCT_ORIGIN_HTTP_HEADER) == null) {
+// The ELASTIC_PRODUCT_ORIGIN_HTTP_HEADER indicates that the request is coming from an Elastic product with a plan
+// to move away from direct access to system indices, and thus deprecation warnings should not be emitted.
+// This header is intended for internal use only.
+client.threadPool().getThreadContext().putHeader(SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY, Boolean.FALSE.toString());
+}

 handler.handleRequest(request, responseChannel, client);
 } catch (Exception e) {
 responseChannel.sendResponse(new BytesRestResponse(responseChannel, e));
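The dispatch logic above is where the header gets written on the REST path: a handler that has not opted in, serving a request that does not carry the internal `X-elastic-product-origin` header, gets `_system_index_access_allowed: false` put into the thread context before the handler runs, and that is what later trips the resolver's deprecation check. A condensed, framework-free sketch of the same decision (header values such as `"kibana"` are made up):

```
import java.util.Map;

public class DispatchGateSketch {
    // Returns the value that would be written into the thread context, or null when no header is added.
    static String systemIndexAccessHeader(boolean handlerAllowsSystemIndexAccessByDefault,
                                          Map<String, String> requestHeaders) {
        boolean fromElasticProduct = requestHeaders.get("X-elastic-product-origin") != null;
        if (handlerAllowsSystemIndexAccessByDefault == false && fromElasticProduct == false) {
            return Boolean.FALSE.toString(); // the resolver will now warn about any system indices it resolves
        }
        return null; // header left unset, so access stays allowed (the default)
    }

    public static void main(String[] args) {
        System.out.println(systemIndexAccessHeader(false, Map.of()));                                     // "false"
        System.out.println(systemIndexAccessHeader(true, Map.of()));                                      // null
        System.out.println(systemIndexAccessHeader(false, Map.of("X-elastic-product-origin", "kibana"))); // null
    }
}
```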
@@ -90,6 +90,15 @@ public interface RestHandler {
 return Collections.emptyList();
 }

+/**
+ * Controls whether requests handled by this class are allowed to to access system indices by default.
+ * @return {@code true} if requests handled by this class should be allowed to access system indices.
+ */
+default boolean allowSystemIndexAccessByDefault() {
+return false;
+}

 class Route {

 private final String path;
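The `RestHandler` default keeps the conservative behaviour for every handler; the hunks that follow show the purpose-specific handlers from the PR description opting back in with a one-method override. For a hypothetical handler outside that list, the opt-in would look the same (sketch only; `MyStatusHandler`, its name and route are made up, and the request body is a no-op):

```
import java.io.IOException;
import java.util.List;

import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.RestRequest;

// Hypothetical handler, shown only to illustrate the override introduced in this change.
public class MyStatusHandler extends BaseRestHandler {
    @Override
    public String getName() {
        return "my_status_handler";
    }

    @Override
    public List<Route> routes() {
        return List.of(new Route(RestRequest.Method.GET, "/_my_status"));
    }

    @Override
    public boolean allowSystemIndexAccessByDefault() {
        // Opt in: requests dispatched to this handler keep system index access
        // and do not trigger the new deprecation warning.
        return true;
    }

    @Override
    protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException {
        return channel -> {}; // no-op placeholder body
    }
}
```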
@@ -57,6 +57,11 @@ public class RestClusterAllocationExplainAction extends BaseRestHandler {
 return "cluster_allocation_explain_action";
 }

+@Override
+public boolean allowSystemIndexAccessByDefault() {
+return true;
+}

 @Override
 public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {
 ClusterAllocationExplainRequest req;

@@ -54,6 +54,11 @@ public class RestClusterHealthAction extends BaseRestHandler {
 return "cluster_health_action";
 }

+@Override
+public boolean allowSystemIndexAccessByDefault() {
+return true;
+}

 @Override
 public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {
 final ClusterHealthRequest clusterHealthRequest = fromRequest(request);

@@ -71,6 +71,11 @@ public class RestClusterRerouteAction extends BaseRestHandler {
 return "cluster_reroute_action";
 }

+@Override
+public boolean allowSystemIndexAccessByDefault() {
+return true;
+}

 @Override
 public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {
 ClusterRerouteRequest clusterRerouteRequest = createRequest(request);

@@ -71,6 +71,11 @@ public class RestClusterStateAction extends BaseRestHandler {
 new Route(GET, "/_cluster/state/{metric}/{indices}")));
 }

+@Override
+public boolean allowSystemIndexAccessByDefault() {
+return true;
+}

 @Override
 public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {
 final ClusterStateRequest clusterStateRequest = Requests.clusterStateRequest();

@@ -75,7 +75,8 @@ public class RestGetAliasesAction extends BaseRestHandler {
 }

 static RestResponse buildRestResponse(boolean aliasesExplicitlyRequested, String[] requestedAliases,
-ImmutableOpenMap<String, List<AliasMetadata>> responseAliasMap, XContentBuilder builder) throws Exception {
+ImmutableOpenMap<String, List<AliasMetadata>> responseAliasMap,
+XContentBuilder builder) throws Exception {
 final Set<String> indicesToDisplay = new HashSet<>();
 final Set<String> returnedAliasNames = new HashSet<>();
 for (final ObjectObjectCursor<String, List<AliasMetadata>> cursor : responseAliasMap) {

@@ -57,6 +57,11 @@ public class RestIndicesShardStoresAction extends BaseRestHandler {
 return "indices_shard_stores_action";
 }

+@Override
+public boolean allowSystemIndexAccessByDefault() {
+return true;
+}

 @Override
 public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {
 IndicesShardStoresRequest indicesShardStoresRequest = new IndicesShardStoresRequest(

@@ -59,6 +59,11 @@ public class RestIndicesStatsAction extends BaseRestHandler {
 return "indices_stats_action";
 }

+@Override
+public boolean allowSystemIndexAccessByDefault() {
+return true;
+}

 static final Map<String, Consumer<IndicesStatsRequest>> METRICS;

 static {

@@ -51,6 +51,11 @@ public class RestRecoveryAction extends BaseRestHandler {
 return "recovery_action";
 }

+@Override
+public boolean allowSystemIndexAccessByDefault() {
+return true;
+}

 @Override
 public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {

@@ -50,6 +50,11 @@ public class RestAliasAction extends AbstractCatAction {
 return "cat_alias_action";
 }

+@Override
+public boolean allowSystemIndexAccessByDefault() {
+return true;
+}

 @Override
 protected RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) {
 final GetAliasesRequest getAliasesRequest = request.hasParam("alias") ?

@@ -45,11 +45,17 @@ public class RestHealthAction extends AbstractCatAction {
 return "cat_health_action";
 }

+@Override
+public boolean allowSystemIndexAccessByDefault() {
+return true;
+}

 @Override
 protected void documentation(StringBuilder sb) {
 sb.append("/_cat/health\n");
 }

 @Override
 public RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) {
 ClusterHealthRequest clusterHealthRequest = new ClusterHealthRequest();

@@ -82,6 +82,11 @@ public class RestIndicesAction extends AbstractCatAction {
 return "cat_indices_action";
 }

+@Override
+public boolean allowSystemIndexAccessByDefault() {
+return true;
+}

 @Override
 protected void documentation(StringBuilder sb) {
 sb.append("/_cat/indices\n");

@@ -57,6 +57,11 @@ public class RestSegmentsAction extends AbstractCatAction {
 return "cat_segments_action";
 }

+@Override
+public boolean allowSystemIndexAccessByDefault() {
+return true;
+}

 @Override
 protected RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) {
 final String[] indices = Strings.splitStringByCommaToArray(request.param("index"));

@@ -73,6 +73,11 @@ public class RestShardsAction extends AbstractCatAction {
 return "cat_shards_action";
 }

+@Override
+public boolean allowSystemIndexAccessByDefault() {
+return true;
+}

 @Override
 protected void documentation(StringBuilder sb) {
 sb.append("/_cat/shards\n");
@@ -31,6 +31,7 @@ import org.elasticsearch.common.settings.IndexScopedSettings;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.settings.SettingsFilter;
 import org.elasticsearch.common.settings.SettingsModule;
+import org.elasticsearch.common.util.concurrent.ThreadContext;
 import org.elasticsearch.plugins.ActionPlugin;
 import org.elasticsearch.plugins.ActionPlugin.ActionHandler;
 import org.elasticsearch.rest.RestChannel;

@@ -107,9 +108,10 @@ public class ActionModuleTests extends ESTestCase {
 public void testSetupRestHandlerContainsKnownBuiltin() {
 SettingsModule settings = new SettingsModule(Settings.EMPTY);
 UsageService usageService = new UsageService();
-ActionModule actionModule = new ActionModule(false, settings.getSettings(), new IndexNameExpressionResolver(),
-settings.getIndexScopedSettings(), settings.getClusterSettings(), settings.getSettingsFilter(), null, emptyList(), null,
-null, usageService, null);
+ActionModule actionModule = new ActionModule(false, settings.getSettings(),
+new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), settings.getIndexScopedSettings(),
+settings.getClusterSettings(), settings.getSettingsFilter(), null, emptyList(), null,
+null, usageService, null);
 actionModule.initRestHandlers(null);
 // At this point the easiest way to confirm that a handler is loaded is to try to register another one on top of it and to fail
 Exception e = expectThrows(IllegalArgumentException.class, () ->

@@ -146,9 +148,10 @@ public class ActionModuleTests extends ESTestCase {
 ThreadPool threadPool = new TestThreadPool(getTestName());
 try {
 UsageService usageService = new UsageService();
-ActionModule actionModule = new ActionModule(false, settings.getSettings(), new IndexNameExpressionResolver(),
-settings.getIndexScopedSettings(), settings.getClusterSettings(), settings.getSettingsFilter(), threadPool,
-singletonList(dupsMainAction), null, null, usageService, null);
+ActionModule actionModule = new ActionModule(false, settings.getSettings(),
+new IndexNameExpressionResolver(threadPool.getThreadContext()), settings.getIndexScopedSettings(),
+settings.getClusterSettings(), settings.getSettingsFilter(), threadPool, singletonList(dupsMainAction),
+null, null, usageService, null);
 Exception e = expectThrows(IllegalArgumentException.class, () -> actionModule.initRestHandlers(null));
 assertThat(e.getMessage(), startsWith("Cannot replace existing handler for [/] for method: GET"));
 } finally {

@@ -180,9 +183,10 @@ public class ActionModuleTests extends ESTestCase {
 ThreadPool threadPool = new TestThreadPool(getTestName());
 try {
 UsageService usageService = new UsageService();
-ActionModule actionModule = new ActionModule(false, settings.getSettings(), new IndexNameExpressionResolver(),
-settings.getIndexScopedSettings(), settings.getClusterSettings(), settings.getSettingsFilter(), threadPool,
-singletonList(registersFakeHandler), null, null, usageService, null);
+ActionModule actionModule = new ActionModule(false, settings.getSettings(),
+new IndexNameExpressionResolver(threadPool.getThreadContext()), settings.getIndexScopedSettings(),
+settings.getClusterSettings(), settings.getSettingsFilter(), threadPool, singletonList(registersFakeHandler),
+null, null, usageService, null);
 actionModule.initRestHandlers(null);
 // At this point the easiest way to confirm that a handler is loaded is to try to register another one on top of it and to fail
 Exception e = expectThrows(IllegalArgumentException.class, () ->

@@ -41,6 +41,7 @@ import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.settings.ClusterSettings;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.unit.TimeValue;
+import org.elasticsearch.common.util.concurrent.ThreadContext;
 import org.elasticsearch.test.ESTestCase;
 import org.elasticsearch.test.transport.MockTransport;
 import org.elasticsearch.threadpool.TestThreadPool;

@@ -132,7 +133,7 @@ public class TransportAddVotingConfigExclusionsActionTests extends ESTestCase {
 clusterSettings = new ClusterSettings(nodeSettings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS);

 new TransportAddVotingConfigExclusionsAction(nodeSettings, clusterSettings, transportService, clusterService, threadPool,
-new ActionFilters(emptySet()), new IndexNameExpressionResolver()); // registers action
+new ActionFilters(emptySet()), new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); // registers action

 transportService.start();
 transportService.acceptIncomingRequests();

@@ -35,6 +35,7 @@ import org.elasticsearch.cluster.service.ClusterService;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.unit.TimeValue;
+import org.elasticsearch.common.util.concurrent.ThreadContext;
 import org.elasticsearch.test.ESTestCase;
 import org.elasticsearch.test.transport.MockTransport;
 import org.elasticsearch.threadpool.TestThreadPool;

@@ -95,7 +96,7 @@ public class TransportClearVotingConfigExclusionsActionTests extends ESTestCase
 TransportService.NOOP_TRANSPORT_INTERCEPTOR, boundTransportAddress -> localNode, null, emptySet());

 new TransportClearVotingConfigExclusionsAction(transportService, clusterService, threadPool, new ActionFilters(emptySet()),
-new IndexNameExpressionResolver()); // registers action
+new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); // registers action

 transportService.start();
 transportService.acceptIncomingRequests();

@@ -18,8 +18,14 @@
 */
 package org.elasticsearch.action.admin.indices.alias.get;

+import org.elasticsearch.Version;
+import org.elasticsearch.cluster.ClusterState;
 import org.elasticsearch.cluster.metadata.AliasMetadata;
+import org.elasticsearch.cluster.metadata.IndexMetadata;
+import org.elasticsearch.cluster.metadata.Metadata;
 import org.elasticsearch.common.collect.ImmutableOpenMap;
+import org.elasticsearch.indices.SystemIndexDescriptor;
+import org.elasticsearch.indices.SystemIndices;
 import org.elasticsearch.test.ESTestCase;

 import java.util.Collections;

@@ -28,6 +34,7 @@ import java.util.List;
 import static org.hamcrest.Matchers.equalTo;

 public class TransportGetAliasesActionTests extends ESTestCase {
+private final SystemIndices EMPTY_SYSTEM_INDICES = new SystemIndices(Collections.emptyMap());

 public void testPostProcess() {
 GetAliasesRequest request = new GetAliasesRequest();

@@ -35,7 +42,8 @@ public class TransportGetAliasesActionTests extends ESTestCase {
 .fPut("b", Collections.singletonList(new AliasMetadata.Builder("y").build()))
 .build();
 ImmutableOpenMap<String, List<AliasMetadata>> result =
-TransportGetAliasesAction.postProcess(request, new String[]{"a", "b", "c"}, aliases);
+TransportGetAliasesAction.postProcess(request, new String[]{"a", "b", "c"}, aliases, ClusterState.EMPTY_STATE, false,
+EMPTY_SYSTEM_INDICES);
 assertThat(result.size(), equalTo(3));
 assertThat(result.get("a").size(), equalTo(0));
 assertThat(result.get("b").size(), equalTo(1));

@@ -46,7 +54,8 @@ public class TransportGetAliasesActionTests extends ESTestCase {
 aliases = ImmutableOpenMap.<String, List<AliasMetadata>>builder()
 .fPut("b", Collections.singletonList(new AliasMetadata.Builder("y").build()))
 .build();
-result = TransportGetAliasesAction.postProcess(request, new String[]{"a", "b", "c"}, aliases);
+result = TransportGetAliasesAction.postProcess(request, new String[]{"a", "b", "c"}, aliases, ClusterState.EMPTY_STATE, false,
+EMPTY_SYSTEM_INDICES);
 assertThat(result.size(), equalTo(3));
 assertThat(result.get("a").size(), equalTo(0));
 assertThat(result.get("b").size(), equalTo(1));

@@ -56,9 +65,129 @@ public class TransportGetAliasesActionTests extends ESTestCase {
 aliases = ImmutableOpenMap.<String, List<AliasMetadata>>builder()
 .fPut("b", Collections.singletonList(new AliasMetadata.Builder("y").build()))
 .build();
-result = TransportGetAliasesAction.postProcess(request, new String[]{"a", "b", "c"}, aliases);
+result = TransportGetAliasesAction.postProcess(request, new String[]{"a", "b", "c"}, aliases, ClusterState.EMPTY_STATE, false,
+EMPTY_SYSTEM_INDICES);
 assertThat(result.size(), equalTo(1));
 assertThat(result.get("b").size(), equalTo(1));
 }

+public void testDeprecationWarningEmittedForTotalWildcard() {
+ClusterState state = systemIndexTestClusterState();
+
+GetAliasesRequest request = new GetAliasesRequest();
+ImmutableOpenMap<String, List<AliasMetadata>> aliases = ImmutableOpenMap.<String, List<AliasMetadata>>builder()
+.fPut(".b", Collections.singletonList(new AliasMetadata.Builder(".y").build()))
+.fPut("c", Collections.singletonList(new AliasMetadata.Builder("d").build()))
+.build();
+final String[] concreteIndices = {"a", ".b", "c"};
+assertEquals(state.metadata().findAliases(request, concreteIndices), aliases);
+ImmutableOpenMap<String, List<AliasMetadata>> result =
+TransportGetAliasesAction.postProcess(request, concreteIndices, aliases, state, false, EMPTY_SYSTEM_INDICES);
+assertThat(result.size(), equalTo(3));
+assertThat(result.get("a").size(), equalTo(0));
+assertThat(result.get(".b").size(), equalTo(1));
+assertThat(result.get("c").size(), equalTo(1));
+assertWarnings("this request accesses system indices: [.b], but in a future major version, direct access to system " +
+"indices will be prevented by default");
+}
+
+public void testDeprecationWarningEmittedWhenSystemIndexIsRequested() {
+ClusterState state = systemIndexTestClusterState();
+
+GetAliasesRequest request = new GetAliasesRequest();
+request.indices(".b");
+ImmutableOpenMap<String, List<AliasMetadata>> aliases = ImmutableOpenMap.<String, List<AliasMetadata>>builder()
+.fPut(".b", Collections.singletonList(new AliasMetadata.Builder(".y").build()))
+.build();
+final String[] concreteIndices = {".b"};
+assertEquals(state.metadata().findAliases(request, concreteIndices), aliases);
+ImmutableOpenMap<String, List<AliasMetadata>> result =
+TransportGetAliasesAction.postProcess(request, concreteIndices, aliases, state, false, EMPTY_SYSTEM_INDICES);
+assertThat(result.size(), equalTo(1));
+assertThat(result.get(".b").size(), equalTo(1));
+assertWarnings("this request accesses system indices: [.b], but in a future major version, direct access to system " +
+"indices will be prevented by default");
+}
+
+public void testDeprecationWarningEmittedWhenSystemIndexIsRequestedByAlias() {
+ClusterState state = systemIndexTestClusterState();
+
+GetAliasesRequest request = new GetAliasesRequest(".y");
+ImmutableOpenMap<String, List<AliasMetadata>> aliases = ImmutableOpenMap.<String, List<AliasMetadata>>builder()
+.fPut(".b", Collections.singletonList(new AliasMetadata.Builder(".y").build()))
+.build();
+final String[] concreteIndices = {"a", ".b", "c"};
+assertEquals(state.metadata().findAliases(request, concreteIndices), aliases);
+ImmutableOpenMap<String, List<AliasMetadata>> result =
+TransportGetAliasesAction.postProcess(request, concreteIndices, aliases, state, false, EMPTY_SYSTEM_INDICES);
+assertThat(result.size(), equalTo(1));
+assertThat(result.get(".b").size(), equalTo(1));
+assertWarnings("this request accesses system indices: [.b], but in a future major version, direct access to system " +
+"indices will be prevented by default");
+}
+
+public void testDeprecationWarningNotEmittedWhenSystemAccessAllowed() {
+ClusterState state = systemIndexTestClusterState();
+
+GetAliasesRequest request = new GetAliasesRequest(".y");
+ImmutableOpenMap<String, List<AliasMetadata>> aliases = ImmutableOpenMap.<String, List<AliasMetadata>>builder()
+.fPut(".b", Collections.singletonList(new AliasMetadata.Builder(".y").build()))
+.build();
+final String[] concreteIndices = {"a", ".b", "c"};
+assertEquals(state.metadata().findAliases(request, concreteIndices), aliases);
+ImmutableOpenMap<String, List<AliasMetadata>> result =
+TransportGetAliasesAction.postProcess(request, concreteIndices, aliases, state, true, EMPTY_SYSTEM_INDICES);
+assertThat(result.size(), equalTo(1));
+assertThat(result.get(".b").size(), equalTo(1));
+}
+
+/**
+ * Ensures that deprecation warnings are not emitted when
+ */
+public void testDeprecationWarningNotEmittedWhenOnlyNonsystemIndexRequested() {
+ClusterState state = systemIndexTestClusterState();
+
+GetAliasesRequest request = new GetAliasesRequest();
+request.indices("c");
+ImmutableOpenMap<String, List<AliasMetadata>> aliases = ImmutableOpenMap.<String, List<AliasMetadata>>builder()
+.fPut("c", Collections.singletonList(new AliasMetadata.Builder("d").build()))
+.build();
+final String[] concreteIndices = {"c"};
+assertEquals(state.metadata().findAliases(request, concreteIndices), aliases);
+ImmutableOpenMap<String, List<AliasMetadata>> result =
+TransportGetAliasesAction.postProcess(request, concreteIndices, aliases, state, false, EMPTY_SYSTEM_INDICES);
+assertThat(result.size(), equalTo(1));
+assertThat(result.get("c").size(), equalTo(1));
+}
+
+public void testDeprecationWarningEmittedWhenRequestingNonExistingAliasInSystemPattern() {
+ClusterState state = systemIndexTestClusterState();
+SystemIndices systemIndices = new SystemIndices(Collections.singletonMap(this.getTestName(),
+Collections.singletonList(new SystemIndexDescriptor(".y", "an index that doesn't exist"))));
+
+GetAliasesRequest request = new GetAliasesRequest(".y");
+ImmutableOpenMap<String, List<AliasMetadata>> aliases = ImmutableOpenMap.<String, List<AliasMetadata>>builder()
+.build();
+final String[] concreteIndices = {};
+assertEquals(state.metadata().findAliases(request, concreteIndices), aliases);
+ImmutableOpenMap<String, List<AliasMetadata>> result =
+TransportGetAliasesAction.postProcess(request, concreteIndices, aliases, state, false, systemIndices);
+assertThat(result.size(), equalTo(0));
+assertWarnings("this request accesses aliases with names reserved for system indices: [.y], but in a future major version, direct" +
+"access to system indices and their aliases will not be allowed");
+}
+
+public ClusterState systemIndexTestClusterState() {
+return ClusterState.builder(ClusterState.EMPTY_STATE)
+.metadata(Metadata.builder()
+.put(IndexMetadata.builder("a").settings(settings(Version.CURRENT)).numberOfShards(1).numberOfReplicas(0))
+.put(IndexMetadata.builder(".b").settings(settings(Version.CURRENT)).numberOfShards(1).numberOfReplicas(0)
+.system(true).putAlias(AliasMetadata.builder(".y")))
+.put(IndexMetadata.builder("c").settings(settings(Version.CURRENT)).numberOfShards(1).numberOfReplicas(0)
+.putAlias(AliasMetadata.builder("d")))
+.build())
+.build();
+}
+
 }
|
@ -68,6 +68,9 @@ public class RestForceMergeActionTests extends RestActionTestCase {
|
|||
.withParams(params)
|
||||
.build();
|
||||
|
||||
// We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
|
||||
verifyingClient.setExecuteVerifier((arg1, arg2) -> null);
|
||||
|
||||
dispatchRequest(request);
|
||||
assertWarnings("setting only_expunge_deletes and max_num_segments at the same time is deprecated " +
|
||||
"and will be rejected in a future version");
|
||||
|
|
|
@ -30,6 +30,7 @@ import org.elasticsearch.common.settings.IndexScopedSettings;
|
|||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.settings.SettingsFilter;
|
||||
import org.elasticsearch.common.settings.SettingsModule;
|
||||
import org.elasticsearch.common.util.concurrent.ThreadContext;
|
||||
import org.elasticsearch.index.Index;
|
||||
import org.elasticsearch.indices.IndicesService;
|
||||
import org.elasticsearch.test.ESSingleNodeTestCase;
|
||||
|
@ -121,6 +122,10 @@ public class GetIndexActionTests extends ESSingleNodeTestCase {
|
|||
}
|
||||
|
||||
static class Resolver extends IndexNameExpressionResolver {
|
||||
Resolver() {
|
||||
super(new ThreadContext(Settings.EMPTY));
|
||||
}
|
||||
|
||||
@Override
|
||||
public String[] concreteIndexNames(ClusterState state, IndicesRequest request) {
|
||||
return request.indices();
|
||||
|
|
|
@ -30,6 +30,8 @@ import org.elasticsearch.cluster.metadata.Metadata;
|
|||
import org.elasticsearch.common.Strings;
|
||||
import org.elasticsearch.common.bytes.BytesReference;
|
||||
import org.elasticsearch.common.collect.Tuple;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.util.concurrent.ThreadContext;
|
||||
import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.common.xcontent.XContentType;
|
||||
|
@ -168,7 +170,8 @@ public class PutMappingRequestTests extends ESTestCase {
|
|||
tuple("alias2", org.elasticsearch.common.collect.List.of(tuple("index2", false), tuple("index3", true)))
|
||||
));
|
||||
PutMappingRequest request = new PutMappingRequest().indices("foo", "alias1", "alias2").writeIndexOnly(true);
|
||||
Index[] indices = TransportPutMappingAction.resolveIndices(cs, request, new IndexNameExpressionResolver());
|
||||
Index[] indices = TransportPutMappingAction.resolveIndices(cs, request,
|
||||
new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)));
|
||||
List<String> indexNames = Arrays.stream(indices).map(Index::getName).collect(Collectors.toList());
|
||||
IndexAbstraction expectedDs = cs.metadata().getIndicesLookup().get("foo");
|
||||
// should resolve the data stream and each alias to their respective write indices
|
||||
|
@ -189,7 +192,8 @@ public class PutMappingRequestTests extends ESTestCase {
|
|||
tuple("alias2", org.elasticsearch.common.collect.List.of(tuple("index2", false), tuple("index3", true)))
|
||||
));
|
||||
PutMappingRequest request = new PutMappingRequest().indices("foo", "alias1", "alias2");
|
||||
Index[] indices = TransportPutMappingAction.resolveIndices(cs, request, new IndexNameExpressionResolver());
|
||||
Index[] indices = TransportPutMappingAction.resolveIndices(cs, request,
|
||||
new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)));
|
||||
List<String> indexNames = Arrays.stream(indices).map(Index::getName).collect(Collectors.toList());
|
||||
IndexAbstraction expectedDs = cs.metadata().getIndicesLookup().get("foo");
|
||||
List<String> expectedIndices = expectedDs.getIndices().stream().map(im -> im.getIndex().getName()).collect(Collectors.toList());
|
||||
|
@ -212,7 +216,8 @@ public class PutMappingRequestTests extends ESTestCase {
|
|||
tuple("alias2", org.elasticsearch.common.collect.List.of(tuple("index2", false), tuple("index3", true)))
|
||||
));
|
||||
PutMappingRequest request = new PutMappingRequest().indices("foo", "index3").writeIndexOnly(true);
|
||||
Index[] indices = TransportPutMappingAction.resolveIndices(cs, request, new IndexNameExpressionResolver());
|
||||
Index[] indices = TransportPutMappingAction.resolveIndices(cs, request,
|
||||
new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)));
|
||||
List<String> indexNames = Arrays.stream(indices).map(Index::getName).collect(Collectors.toList());
|
||||
IndexAbstraction expectedDs = cs.metadata().getIndicesLookup().get("foo");
|
||||
List<String> expectedIndices = expectedDs.getIndices().stream().map(im -> im.getIndex().getName()).collect(Collectors.toList());
|
||||
|
@ -236,7 +241,8 @@ public class PutMappingRequestTests extends ESTestCase {
|
|||
));
|
||||
PutMappingRequest request = new PutMappingRequest().indices("*").writeIndexOnly(true);
|
||||
IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
|
||||
() -> TransportPutMappingAction.resolveIndices(cs2, request, new IndexNameExpressionResolver()));
|
||||
() -> TransportPutMappingAction.resolveIndices(cs2, request,
|
||||
new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))));
|
||||
assertThat(e.getMessage(), containsString("The index expression [*] and options provided did not point to a single write-index"));
|
||||
}
|
||||
|
||||
|
@ -255,7 +261,8 @@ public class PutMappingRequestTests extends ESTestCase {
|
|||
));
|
||||
PutMappingRequest request = new PutMappingRequest().indices("alias2").writeIndexOnly(true);
|
||||
IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
|
||||
() -> TransportPutMappingAction.resolveIndices(cs2, request, new IndexNameExpressionResolver()));
|
||||
() -> TransportPutMappingAction.resolveIndices(cs2, request,
|
||||
new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))));
|
||||
assertThat(e.getMessage(), containsString("no write index is defined for alias [alias2]"));
|
||||
}
|
||||
|
||||
|
|
|
@ -34,6 +34,7 @@ import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
|
|||
import org.elasticsearch.cluster.metadata.Metadata;
|
||||
import org.elasticsearch.common.Strings;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.util.concurrent.ThreadContext;
|
||||
import org.elasticsearch.test.ESTestCase;
|
||||
|
||||
import java.util.ArrayList;
|
||||
|
@ -69,7 +70,8 @@ public class ResolveIndexTests extends ESTestCase {
|
|||
};
|
||||
|
||||
private Metadata metadata = buildMetadata(dataStreams, indices);
|
||||
private IndexAbstractionResolver resolver = new IndexAbstractionResolver(new IndexNameExpressionResolver());
|
||||
private IndexAbstractionResolver resolver = new IndexAbstractionResolver(
|
||||
new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)));
|
||||
|
||||
public void testResolveStarWithDefaultOptions() {
|
||||
String[] names = new String[] {"*"};
|
||||
|
|
|
@ -51,6 +51,7 @@ import org.elasticsearch.common.compress.CompressedXContent;
|
|||
import org.elasticsearch.common.settings.IndexScopedSettings;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.unit.TimeValue;
|
||||
import org.elasticsearch.common.util.concurrent.ThreadContext;
|
||||
import org.elasticsearch.common.xcontent.json.JsonXContent;
|
||||
import org.elasticsearch.env.Environment;
|
||||
import org.elasticsearch.index.Index;
|
||||
|
@ -297,7 +298,7 @@ public class MetadataRolloverServiceTests extends ESTestCase {
|
|||
|
||||
public void testGenerateRolloverIndexName() {
|
||||
String invalidIndexName = randomAlphaOfLength(10) + "A";
|
||||
IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver();
|
||||
IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY));
|
||||
expectThrows(IllegalArgumentException.class, () ->
|
||||
MetadataRolloverService.generateRolloverIndexName(invalidIndexName, indexNameExpressionResolver));
|
||||
int num = randomIntBetween(0, 100);
|
||||
|
|
|
@ -30,6 +30,7 @@ import org.elasticsearch.common.settings.IndexScopedSettings;
|
|||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.settings.SettingsFilter;
|
||||
import org.elasticsearch.common.settings.SettingsModule;
|
||||
import org.elasticsearch.common.util.concurrent.ThreadContext;
|
||||
import org.elasticsearch.index.Index;
|
||||
import org.elasticsearch.test.ESTestCase;
|
||||
import org.elasticsearch.test.transport.CapturingTransport;
|
||||
|
@ -129,6 +130,10 @@ public class GetSettingsActionTests extends ESTestCase {
|
|||
}
|
||||
|
||||
static class Resolver extends IndexNameExpressionResolver {
|
||||
Resolver() {
|
||||
super(new ThreadContext(Settings.EMPTY));
|
||||
}
|
||||
|
||||
@Override
|
||||
public String[] concreteIndexNames(ClusterState state, IndicesRequest request) {
|
||||
return request.indices();
|
||||
|
|
|
@ -51,6 +51,7 @@ import org.elasticsearch.common.settings.Settings;
|
|||
import org.elasticsearch.common.unit.TimeValue;
|
||||
import org.elasticsearch.common.util.concurrent.AtomicArray;
|
||||
import org.elasticsearch.common.util.concurrent.EsExecutors;
|
||||
import org.elasticsearch.common.util.concurrent.ThreadContext;
|
||||
import org.elasticsearch.index.IndexNotFoundException;
|
||||
import org.elasticsearch.index.IndexSettings;
|
||||
import org.elasticsearch.index.IndexingPressure;
|
||||
|
@ -147,7 +148,7 @@ public class TransportBulkActionIngestTests extends ESTestCase {
|
|||
null, null, new ActionFilters(Collections.emptySet()), null,
|
||||
new AutoCreateIndex(
|
||||
SETTINGS, new ClusterSettings(SETTINGS, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS),
|
||||
new IndexNameExpressionResolver(),
|
||||
new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)),
|
||||
new SystemIndices(emptyMap())
|
||||
), new IndexingPressure(SETTINGS), new SystemIndices(emptyMap())
|
||||
);
|
||||
|
|
|
@ -39,6 +39,7 @@ import org.elasticsearch.cluster.service.ClusterService;
|
|||
import org.elasticsearch.common.Strings;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.util.concurrent.AtomicArray;
|
||||
import org.elasticsearch.common.util.concurrent.ThreadContext;
|
||||
import org.elasticsearch.common.xcontent.XContentType;
|
||||
import org.elasticsearch.index.IndexNotFoundException;
|
||||
import org.elasticsearch.rest.action.document.RestBulkAction;
|
||||
|
@ -215,6 +216,10 @@ public class TransportBulkActionTookTests extends ESTestCase {
|
|||
}
|
||||
|
||||
static class Resolver extends IndexNameExpressionResolver {
|
||||
Resolver() {
|
||||
super(new ThreadContext(Settings.EMPTY));
|
||||
}
|
||||
|
||||
@Override
|
||||
public String[] concreteIndexNames(ClusterState state, IndicesRequest request) {
|
||||
return request.indices();
|
||||
|
|
|
@ -37,6 +37,7 @@ import org.elasticsearch.cluster.service.ClusterService;
|
|||
import org.elasticsearch.common.bytes.BytesReference;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.util.concurrent.AtomicArray;
|
||||
import org.elasticsearch.common.util.concurrent.ThreadContext;
|
||||
import org.elasticsearch.common.xcontent.XContentFactory;
|
||||
import org.elasticsearch.common.xcontent.XContentHelper;
|
||||
import org.elasticsearch.common.xcontent.XContentType;
|
||||
|
@ -222,6 +223,10 @@ public class TransportMultiGetActionTests extends ESTestCase {
|
|||
|
||||
static class Resolver extends IndexNameExpressionResolver {
|
||||
|
||||
Resolver() {
|
||||
super(new ThreadContext(Settings.EMPTY));
|
||||
}
|
||||
|
||||
@Override
|
||||
public Index concreteSingleIndex(ClusterState state, IndicesRequest request) {
|
||||
return new Index("index1", randomBase64UUID());
|
||||
|
|
|
@ -32,6 +32,7 @@ import org.elasticsearch.common.Randomness;
|
|||
import org.elasticsearch.common.UUIDs;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.util.concurrent.AtomicArray;
|
||||
import org.elasticsearch.common.util.concurrent.ThreadContext;
|
||||
import org.elasticsearch.search.internal.InternalSearchResponse;
|
||||
import org.elasticsearch.tasks.Task;
|
||||
import org.elasticsearch.tasks.TaskManager;
|
||||
|
@ -191,6 +192,10 @@ public class MultiSearchActionTookTests extends ESTestCase {
|
|||
}
|
||||
|
||||
static class Resolver extends IndexNameExpressionResolver {
|
||||
Resolver() {
|
||||
super(new ThreadContext(Settings.EMPTY));
|
||||
}
|
||||
|
||||
@Override
|
||||
public String[] concreteIndexNames(ClusterState state, IndicesRequest request) {
|
||||
return request.indices();
|
||||
|
|
|
@ -28,6 +28,7 @@ import org.elasticsearch.cluster.metadata.Metadata;
|
|||
import org.elasticsearch.common.collect.Tuple;
|
||||
import org.elasticsearch.common.settings.ClusterSettings;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.util.concurrent.ThreadContext;
|
||||
import org.elasticsearch.index.IndexNotFoundException;
|
||||
import org.elasticsearch.indices.SystemIndexDescriptor;
|
||||
import org.elasticsearch.indices.SystemIndices;
|
||||
|
@ -194,7 +195,8 @@ public class AutoCreateIndexTests extends ESTestCase {
|
|||
|
||||
ClusterSettings clusterSettings = new ClusterSettings(settings,
|
||||
ClusterSettings.BUILT_IN_CLUSTER_SETTINGS);
|
||||
AutoCreateIndex autoCreateIndex = new AutoCreateIndex(settings, clusterSettings, new IndexNameExpressionResolver(),
|
||||
AutoCreateIndex autoCreateIndex = new AutoCreateIndex(settings, clusterSettings,
|
||||
new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)),
|
||||
new SystemIndices(org.elasticsearch.common.collect.Map.of()));
|
||||
assertThat(autoCreateIndex.getAutoCreate().isAutoCreateIndex(), equalTo(value));
|
||||
|
||||
|
@ -222,7 +224,7 @@ public class AutoCreateIndexTests extends ESTestCase {
|
|||
SystemIndices systemIndices = new SystemIndices(org.elasticsearch.common.collect.Map.of("plugin",
|
||||
org.elasticsearch.common.collect.List.of(new SystemIndexDescriptor(TEST_SYSTEM_INDEX_NAME, ""))));
|
||||
return new AutoCreateIndex(settings, new ClusterSettings(settings,
|
||||
ClusterSettings.BUILT_IN_CLUSTER_SETTINGS), new IndexNameExpressionResolver(), systemIndices);
|
||||
ClusterSettings.BUILT_IN_CLUSTER_SETTINGS), new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), systemIndices);
|
||||
}
|
||||
|
||||
private void expectNotMatch(ClusterState clusterState, AutoCreateIndex autoCreateIndex, String index) {
|
||||
|
|
|
@ -50,6 +50,7 @@ import org.elasticsearch.cluster.service.ClusterService;
|
|||
import org.elasticsearch.common.io.stream.StreamInput;
|
||||
import org.elasticsearch.common.io.stream.Writeable;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.util.concurrent.ThreadContext;
|
||||
import org.elasticsearch.index.Index;
|
||||
import org.elasticsearch.index.shard.ShardId;
|
||||
import org.elasticsearch.rest.RestStatus;
|
||||
|
@ -177,6 +178,10 @@ public class TransportBroadcastByNodeActionTests extends ESTestCase {
|
|||
}
|
||||
|
||||
class MyResolver extends IndexNameExpressionResolver {
|
||||
MyResolver() {
|
||||
super(new ThreadContext(Settings.EMPTY));
|
||||
}
|
||||
|
||||
@Override
|
||||
public String[] concreteIndexNames(ClusterState state, IndicesRequest request) {
|
||||
return request.indices();
|
||||
|
|
|
@ -42,7 +42,9 @@ import org.elasticsearch.cluster.node.DiscoveryNodes;
|
|||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.common.io.stream.StreamInput;
|
||||
import org.elasticsearch.common.io.stream.StreamOutput;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.unit.TimeValue;
|
||||
import org.elasticsearch.common.util.concurrent.ThreadContext;
|
||||
import org.elasticsearch.discovery.MasterNotDiscoveredException;
|
||||
import org.elasticsearch.node.NodeClosedException;
|
||||
import org.elasticsearch.rest.RestStatus;
|
||||
|
@ -171,7 +173,7 @@ public class TransportMasterNodeActionTests extends ESTestCase {
|
|||
Action(String actionName, TransportService transportService, ClusterService clusterService,
|
||||
ThreadPool threadPool) {
|
||||
super(actionName, transportService, clusterService, threadPool,
|
||||
new ActionFilters(new HashSet<>()), Request::new, new IndexNameExpressionResolver());
|
||||
new ActionFilters(new HashSet<>()), Request::new, new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)));
|
||||
}
|
||||
|
||||
@Override
|
||||
|
|
|
@ -42,6 +42,7 @@ import org.elasticsearch.common.network.NetworkService;
|
|||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.util.PageCacheRecycler;
|
||||
import org.elasticsearch.common.util.concurrent.ConcurrentCollections;
|
||||
import org.elasticsearch.common.util.concurrent.ThreadContext;
|
||||
import org.elasticsearch.core.internal.io.IOUtils;
|
||||
import org.elasticsearch.index.shard.ShardId;
|
||||
import org.elasticsearch.indices.breaker.CircuitBreakerService;
|
||||
|
@@ -103,7 +104,7 @@ public class BroadcastReplicationTests extends ESTestCase {
 transportService.start();
 transportService.acceptIncomingRequests();
 broadcastReplicationAction = new TestBroadcastReplicationAction(clusterService, transportService,
-new ActionFilters(new HashSet<>()), new IndexNameExpressionResolver(), null);
+new ActionFilters(new HashSet<>()), new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), null);
 }

 @Override
@@ -39,7 +39,9 @@ import org.elasticsearch.cluster.service.ClusterService;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
 import org.elasticsearch.common.io.stream.Writeable;
+import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.unit.TimeValue;
+import org.elasticsearch.common.util.concurrent.ThreadContext;
 import org.elasticsearch.index.shard.ShardId;
 import org.elasticsearch.rest.RestStatus;
 import org.elasticsearch.test.ESTestCase;
@@ -136,6 +138,10 @@ public class TransportInstanceSingleOperationActionTests extends ESTestCase {
 }

 class MyResolver extends IndexNameExpressionResolver {
+MyResolver() {
+super(new ThreadContext(Settings.EMPTY));
+}
+
 @Override
 public String[] concreteIndexNames(ClusterState state, IndicesRequest request) {
 return request.indices();
@@ -38,6 +38,7 @@ import org.elasticsearch.cluster.service.ClusterService;
 import org.elasticsearch.common.bytes.BytesReference;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.util.concurrent.AtomicArray;
+import org.elasticsearch.common.util.concurrent.ThreadContext;
 import org.elasticsearch.common.xcontent.XContentFactory;
 import org.elasticsearch.common.xcontent.XContentHelper;
 import org.elasticsearch.common.xcontent.XContentType;
@@ -223,6 +224,10 @@ public class TransportMultiTermVectorsActionTests extends ESTestCase {

 static class Resolver extends IndexNameExpressionResolver {

+Resolver() {
+super(new ThreadContext(Settings.EMPTY));
+}
+
 @Override
 public Index concreteSingleIndex(ClusterState state, IndicesRequest request) {
 return new Index("index1", randomBase64UUID());
@@ -51,6 +51,7 @@ import org.elasticsearch.common.settings.Setting;
 import org.elasticsearch.common.settings.Setting.Property;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.settings.SettingsModule;
+import org.elasticsearch.common.util.concurrent.ThreadContext;
 import org.elasticsearch.gateway.GatewayAllocator;
 import org.elasticsearch.plugins.ClusterPlugin;
 import org.elasticsearch.test.gateway.TestGatewayAllocator;
@@ -65,8 +66,23 @@ import java.util.function.Supplier;

 public class ClusterModuleTests extends ModuleTestCase {
 private ClusterInfoService clusterInfoService = EmptyClusterInfoService.INSTANCE;
-private ClusterService clusterService = new ClusterService(Settings.EMPTY,
-new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS), null);
+private ClusterService clusterService;
+private ThreadContext threadContext;
+
+@Override
+public void setUp() throws Exception {
+super.setUp();
+threadContext = new ThreadContext(Settings.EMPTY);
+clusterService = new ClusterService(Settings.EMPTY,
+new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS), null);
+}
+
+@Override
+public void tearDown() throws Exception {
+super.tearDown();
+clusterService.close();
+}
+
 static class FakeAllocationDecider extends AllocationDecider {
 protected FakeAllocationDecider() {
 }
@@ -121,7 +137,7 @@ public class ClusterModuleTests extends ModuleTestCase {
 public Collection<AllocationDecider> createAllocationDeciders(Settings settings, ClusterSettings clusterSettings) {
 return Collections.singletonList(new EnableAllocationDecider(settings, clusterSettings));
 }
-}), clusterInfoService, null));
+}), clusterInfoService, null, threadContext));
 assertEquals(e.getMessage(),
 "Cannot specify allocation decider [" + EnableAllocationDecider.class.getName() + "] twice");
 }
@@ -133,7 +149,7 @@ public class ClusterModuleTests extends ModuleTestCase {
 public Collection<AllocationDecider> createAllocationDeciders(Settings settings, ClusterSettings clusterSettings) {
 return Collections.singletonList(new FakeAllocationDecider());
 }
-}), clusterInfoService, null);
+}), clusterInfoService, null, threadContext);
 assertTrue(module.deciderList.stream().anyMatch(d -> d.getClass().equals(FakeAllocationDecider.class)));
 }

@@ -145,7 +161,7 @@ public class ClusterModuleTests extends ModuleTestCase {
 return Collections.singletonMap(name, supplier);
 }
 }
-), clusterInfoService, null);
+), clusterInfoService, null, threadContext);
 }

 public void testRegisterShardsAllocator() {
@@ -163,7 +179,7 @@ public class ClusterModuleTests extends ModuleTestCase {
 public void testUnknownShardsAllocator() {
 Settings settings = Settings.builder().put(ClusterModule.SHARDS_ALLOCATOR_TYPE_SETTING.getKey(), "dne").build();
 IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () ->
-new ClusterModule(settings, clusterService, Collections.emptyList(), clusterInfoService, null));
+new ClusterModule(settings, clusterService, Collections.emptyList(), clusterInfoService, null, threadContext));
 assertEquals("Unknown ShardsAllocator [dne]", e.getMessage());
 }

@@ -231,14 +247,15 @@ public class ClusterModuleTests extends ModuleTestCase {

 public void testRejectsReservedExistingShardsAllocatorName() {
 final ClusterModule clusterModule = new ClusterModule(Settings.EMPTY, clusterService,
-Collections.singletonList(existingShardsAllocatorPlugin(GatewayAllocator.ALLOCATOR_NAME)), clusterInfoService, null);
+Collections.singletonList(existingShardsAllocatorPlugin(GatewayAllocator.ALLOCATOR_NAME)), clusterInfoService, null,
+threadContext);
 expectThrows(IllegalArgumentException.class, () -> clusterModule.setExistingShardsAllocators(new TestGatewayAllocator()));
 }

 public void testRejectsDuplicateExistingShardsAllocatorName() {
 final ClusterModule clusterModule = new ClusterModule(Settings.EMPTY, clusterService,
-Arrays.asList(existingShardsAllocatorPlugin("duplicate"), existingShardsAllocatorPlugin("duplicate")), clusterInfoService,
-null);
+Arrays.asList(existingShardsAllocatorPlugin("duplicate"), existingShardsAllocatorPlugin("duplicate")), clusterInfoService, null,
+threadContext);
 expectThrows(IllegalArgumentException.class, () -> clusterModule.setExistingShardsAllocators(new TestGatewayAllocator()));
 }

@@ -47,6 +47,7 @@ import org.elasticsearch.common.collect.ImmutableOpenIntMap;
 import org.elasticsearch.common.io.stream.BytesStreamOutput;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.util.concurrent.ThreadContext;
 import org.elasticsearch.common.util.set.Sets;
 import org.elasticsearch.test.ESTestCase;
 import org.elasticsearch.test.gateway.TestGatewayAllocator;
@@ -78,7 +79,8 @@ import static org.hamcrest.Matchers.is;
 import static org.hamcrest.Matchers.lessThanOrEqualTo;

 public class ClusterStateHealthTests extends ESTestCase {
-private final IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver();
+private final IndexNameExpressionResolver indexNameExpressionResolver =
+new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY));

 private static ThreadPool threadPool;

@@ -43,7 +43,8 @@ public class DateMathExpressionResolverTests extends ESTestCase {

 private final DateMathExpressionResolver expressionResolver = new DateMathExpressionResolver();
 private final Context context = new Context(
-ClusterState.builder(new ClusterName("_name")).build(), IndicesOptions.strictExpand()
+ClusterState.builder(new ClusterName("_name")).build(), IndicesOptions.strictExpand(),
+false
 );

 public void testNormal() throws Exception {
@@ -146,7 +147,7 @@ public class DateMathExpressionResolverTests extends ESTestCase {
 // rounding to today 00:00
 now = DateTime.now(UTC).withHourOfDay(0).withMinuteOfHour(0).withSecondOfMinute(0);
 }
-Context context = new Context(this.context.getState(), this.context.getOptions(), now.getMillis());
+Context context = new Context(this.context.getState(), this.context.getOptions(), now.getMillis(), false);
 List<String> results = expressionResolver.resolve(context, Arrays.asList("<.marvel-{now/d{yyyy.MM.dd|" + timeZone.getID() + "}}>"));
 assertThat(results.size(), equalTo(1));
 logger.info("timezone: [{}], now [{}], name: [{}]", timeZone, now, results.get(0));
@@ -36,12 +36,12 @@ public class IndexAbstractionTests extends ESTestCase {
 final String hiddenAliasName = "hidden_alias";
 AliasMetadata hiddenAliasMetadata = new AliasMetadata.Builder(hiddenAliasName).isHidden(true).build();

-IndexMetadata hidden1 = buildIndexWithAlias("hidden1", hiddenAliasName, true);
-IndexMetadata hidden2 = buildIndexWithAlias("hidden2", hiddenAliasName, true);
-IndexMetadata hidden3 = buildIndexWithAlias("hidden3", hiddenAliasName, true);
+IndexMetadata hidden1 = buildIndexWithAlias("hidden1", hiddenAliasName, true, Version.CURRENT, false);
+IndexMetadata hidden2 = buildIndexWithAlias("hidden2", hiddenAliasName, true, Version.CURRENT, false);
+IndexMetadata hidden3 = buildIndexWithAlias("hidden3", hiddenAliasName, true, Version.CURRENT, false);

-IndexMetadata indexWithNonHiddenAlias = buildIndexWithAlias("nonhidden1", hiddenAliasName, false);
-IndexMetadata indexWithUnspecifiedAlias = buildIndexWithAlias("nonhidden2", hiddenAliasName, null);
+IndexMetadata indexWithNonHiddenAlias = buildIndexWithAlias("nonhidden1", hiddenAliasName, false, Version.CURRENT, false);
+IndexMetadata indexWithUnspecifiedAlias = buildIndexWithAlias("nonhidden2", hiddenAliasName, null, Version.CURRENT, false);

 {
 IndexAbstraction.Alias allHidden = new IndexAbstraction.Alias(hiddenAliasMetadata, hidden1);
@@ -116,13 +116,15 @@ public class IndexAbstractionTests extends ESTestCase {
 }
 }

-private IndexMetadata buildIndexWithAlias(String indexName, String aliasName, @Nullable Boolean aliasIsHidden) {
+private IndexMetadata buildIndexWithAlias(String indexName, String aliasName, @Nullable Boolean aliasIsHidden,
+Version indexCreationVersion, boolean isSystem) {
 final AliasMetadata.Builder aliasMetadata = new AliasMetadata.Builder(aliasName);
 if (Objects.nonNull(aliasIsHidden) || randomBoolean()) {
 aliasMetadata.isHidden(aliasIsHidden);
 }
 return new IndexMetadata.Builder(indexName)
-.settings(settings(Version.CURRENT))
+.settings(settings(indexCreationVersion))
+.system(isSystem)
 .numberOfShards(1)
 .numberOfReplicas(0)
 .putAlias(aliasMetadata)
@@ -19,10 +19,13 @@

 package org.elasticsearch.cluster.metadata;

+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.util.concurrent.ThreadContext;
+
 public class IndexNameExpressionResolverAliasIterationTests extends IndexNameExpressionResolverTests {

-protected IndexNameExpressionResolver getIndexNameExpressionResolver() {
-return new IndexNameExpressionResolver() {
+protected IndexNameExpressionResolver createIndexNameExpressionResolver() {
+return new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)) {
 @Override
 boolean iterateIndexAliases(int indexAliasesSize, int resolvedExpressionsSize) {
 return true;
@@ -19,10 +19,13 @@

 package org.elasticsearch.cluster.metadata;

+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.util.concurrent.ThreadContext;
+
 public class IndexNameExpressionResolverExpressionsIterationTests extends IndexNameExpressionResolverTests {

-protected IndexNameExpressionResolver getIndexNameExpressionResolver() {
-return new IndexNameExpressionResolver() {
+protected IndexNameExpressionResolver createIndexNameExpressionResolver() {
+return new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)) {
 @Override
 boolean iterateIndexAliases(int indexAliasesSize, int resolvedExpressionsSize) {
 return false;
@@ -26,6 +26,7 @@ import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;
 import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
 import org.elasticsearch.action.delete.DeleteRequest;
 import org.elasticsearch.action.index.IndexRequest;
+import org.elasticsearch.action.search.SearchRequest;
 import org.elasticsearch.action.support.IndicesOptions;
 import org.elasticsearch.action.update.UpdateRequest;
 import org.elasticsearch.cluster.ClusterName;
@@ -33,6 +34,7 @@ import org.elasticsearch.cluster.ClusterState;
 import org.elasticsearch.cluster.metadata.IndexMetadata.State;
 import org.elasticsearch.common.Strings;
 import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.util.concurrent.ThreadContext;
 import org.elasticsearch.index.Index;
 import org.elasticsearch.index.IndexNotFoundException;
 import org.elasticsearch.index.IndexSettings;
@@ -48,10 +50,12 @@ import java.util.HashSet;
 import java.util.List;
 import java.util.Set;
 import java.util.function.Function;
+import java.util.stream.Collectors;

 import static org.elasticsearch.cluster.DataStreamTestHelper.createBackingIndex;
 import static org.elasticsearch.cluster.DataStreamTestHelper.createTimestampField;
 import static org.elasticsearch.cluster.metadata.IndexMetadata.INDEX_HIDDEN_SETTING;
+import static org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY;
 import static org.elasticsearch.common.util.set.Sets.newHashSet;
 import static org.hamcrest.Matchers.arrayContaining;
 import static org.hamcrest.Matchers.arrayContainingInAnyOrder;
@@ -67,15 +71,21 @@ import static org.hamcrest.Matchers.notNullValue;

 public class IndexNameExpressionResolverTests extends ESTestCase {
 private IndexNameExpressionResolver indexNameExpressionResolver;
+private ThreadContext threadContext;

-protected IndexNameExpressionResolver getIndexNameExpressionResolver() {
-return new IndexNameExpressionResolver();
+private ThreadContext createThreadContext() {
+return new ThreadContext(Settings.EMPTY);
 }
+
+protected IndexNameExpressionResolver createIndexNameExpressionResolver(ThreadContext threadContext) {
+return new IndexNameExpressionResolver(threadContext);
+}

 @Override
 public void setUp() throws Exception {
 super.setUp();
-indexNameExpressionResolver = getIndexNameExpressionResolver();
+threadContext = createThreadContext();
+indexNameExpressionResolver = createIndexNameExpressionResolver(threadContext);
 }

 public void testIndexOptionsStrict() {
@@ -89,7 +99,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {

 IndicesOptions[] indicesOptions = new IndicesOptions[]{ IndicesOptions.strictExpandOpen(), IndicesOptions.strictExpand()};
 for (IndicesOptions options : indicesOptions) {
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options);
+IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options, false);
 String[] results = indexNameExpressionResolver.concreteIndexNames(context, "foo");
 assertEquals(1, results.length);
 assertEquals("foo", results[0]);
@@ -138,26 +148,27 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 assertEquals("foo", results[0]);
 }

-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen());
+IndexNameExpressionResolver.Context context =
+new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen(), false);
 String[] results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY);
 assertEquals(3, results.length);

 results = indexNameExpressionResolver.concreteIndexNames(context, (String[])null);
 assertEquals(3, results.length);

-context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpand());
+context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpand(), false);
 results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY);
 assertEquals(4, results.length);

 results = indexNameExpressionResolver.concreteIndexNames(context, (String[])null);
 assertEquals(4, results.length);

-context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen());
+context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen(), false);
 results = indexNameExpressionResolver.concreteIndexNames(context, "foofoo*");
 assertEquals(3, results.length);
 assertThat(results, arrayContainingInAnyOrder("foo", "foobar", "foofoo"));

-context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpand());
+context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpand(), false);
 results = indexNameExpressionResolver.concreteIndexNames(context, "foofoo*");
 assertEquals(4, results.length);
 assertThat(results, arrayContainingInAnyOrder("foo", "foobar", "foofoo", "foofoo-closed"));
@@ -173,7 +184,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {

 IndicesOptions[] indicesOptions = new IndicesOptions[]{IndicesOptions.lenientExpandOpen(), IndicesOptions.lenientExpand()};
 for (IndicesOptions options : indicesOptions) {
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options);
+IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options, false);
 String[] results = indexNameExpressionResolver.concreteIndexNames(context, "foo");
 assertEquals(1, results.length);
 assertEquals("foo", results[0]);
@@ -210,20 +221,21 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 assertEquals("foo", results[0]);
 }

-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());
+IndexNameExpressionResolver.Context context =
+new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false);
 String[] results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY);
 assertEquals(3, results.length);

-context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpand());
+context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpand(), false);
 results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY);
 assertEquals(Arrays.toString(results), 4, results.length);

-context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());
+context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false);
 results = indexNameExpressionResolver.concreteIndexNames(context, "foofoo*");
 assertEquals(3, results.length);
 assertThat(results, arrayContainingInAnyOrder("foo", "foobar", "foofoo"));

-context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpand());
+context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpand(), false);
 results = indexNameExpressionResolver.concreteIndexNames(context, "foofoo*");
 assertEquals(4, results.length);
 assertThat(results, arrayContainingInAnyOrder("foo", "foobar", "foofoo", "foofoo-closed"));
@@ -242,7 +254,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 IndicesOptions[] indicesOptions = new IndicesOptions[]{expandOpen, expand};

 for (IndicesOptions options : indicesOptions) {
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options);
+IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options, false);
 String[] results = indexNameExpressionResolver.concreteIndexNames(context, "foo");
 assertEquals(1, results.length);
 assertEquals("foo", results[0]);
@@ -264,11 +276,11 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 }
 }

-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, expandOpen);
+IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, expandOpen, false);
 String[] results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY);
 assertEquals(3, results.length);

-context = new IndexNameExpressionResolver.Context(state, expand);
+context = new IndexNameExpressionResolver.Context(state, expand, false);
 results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY);
 assertEquals(4, results.length);
 }
@@ -286,7 +298,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {

 // Only closed
 IndicesOptions options = IndicesOptions.fromOptions(false, true, false, true, false);
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options);
+IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options, false);
 String[] results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY);
 assertEquals(1, results.length);
 assertEquals("foo", results[0]);
@@ -311,7 +323,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {

 // Only open
 options = IndicesOptions.fromOptions(false, true, true, false, false);
-context = new IndexNameExpressionResolver.Context(state, options);
+context = new IndexNameExpressionResolver.Context(state, options, false);
 results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY);
 assertEquals(2, results.length);
 assertThat(results, arrayContainingInAnyOrder("bar", "foobar"));
@@ -335,7 +347,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {

 // Open and closed
 options = IndicesOptions.fromOptions(false, true, true, true, false);
-context = new IndexNameExpressionResolver.Context(state, options);
+context = new IndexNameExpressionResolver.Context(state, options, false);
 results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY);
 assertEquals(3, results.length);
 assertThat(results, arrayContainingInAnyOrder("bar", "foobar", "foo"));
@@ -374,7 +386,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {

 // open closed and hidden
 options = IndicesOptions.fromOptions(false, true, true, true, true);
-context = new IndexNameExpressionResolver.Context(state, options);
+context = new IndexNameExpressionResolver.Context(state, options, false);
 results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY);
 assertEquals(7, results.length);
 assertThat(results, arrayContainingInAnyOrder("bar", "foobar", "foo", "hidden", "hidden-closed", ".hidden", ".hidden-closed"));
@@ -416,7 +428,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {

 // open and hidden
 options = IndicesOptions.fromOptions(false, true, true, false, true);
-context = new IndexNameExpressionResolver.Context(state, options);
+context = new IndexNameExpressionResolver.Context(state, options, false);
 results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY);
 assertEquals(4, results.length);
 assertThat(results, arrayContainingInAnyOrder("bar", "foobar", "hidden", ".hidden"));
@@ -435,7 +447,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {

 // closed and hidden
 options = IndicesOptions.fromOptions(false, true, false, true, true);
-context = new IndexNameExpressionResolver.Context(state, options);
+context = new IndexNameExpressionResolver.Context(state, options, false);
 results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY);
 assertEquals(3, results.length);
 assertThat(results, arrayContainingInAnyOrder("foo", "hidden-closed", ".hidden-closed"));
@@ -454,7 +466,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {

 // only hidden
 options = IndicesOptions.fromOptions(false, true, false, false, true);
-context = new IndexNameExpressionResolver.Context(state, options);
+context = new IndexNameExpressionResolver.Context(state, options, false);
 results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY);
 assertThat(results, emptyArray());

@@ -468,7 +480,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 assertThat(results, arrayContainingInAnyOrder("hidden-closed"));

 options = IndicesOptions.fromOptions(false, false, true, true, true);
-IndexNameExpressionResolver.Context context2 = new IndexNameExpressionResolver.Context(state, options);
+IndexNameExpressionResolver.Context context2 = new IndexNameExpressionResolver.Context(state, options, false);
 IndexNotFoundException infe = expectThrows(IndexNotFoundException.class,
 () -> indexNameExpressionResolver.concreteIndexNames(context2, "-*"));
 assertThat(infe.getResourceId().toString(), equalTo("[-*]"));
@@ -485,7 +497,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 //ignore unavailable and allow no indices
 {
 IndicesOptions noExpandLenient = IndicesOptions.fromOptions(true, true, false, false, randomBoolean());
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, noExpandLenient);
+IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, noExpandLenient, false);
 String[] results = indexNameExpressionResolver.concreteIndexNames(context, "baz*");
 assertThat(results, emptyArray());

@@ -507,7 +519,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 //ignore unavailable but don't allow no indices
 {
 IndicesOptions noExpandDisallowEmpty = IndicesOptions.fromOptions(true, false, false, false, randomBoolean());
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, noExpandDisallowEmpty);
+IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, noExpandDisallowEmpty, false);

 {
 IndexNotFoundException infe = expectThrows(IndexNotFoundException.class,
@@ -532,7 +544,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 //error on unavailable but allow no indices
 {
 IndicesOptions noExpandErrorUnavailable = IndicesOptions.fromOptions(false, true, false, false, randomBoolean());
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, noExpandErrorUnavailable);
+IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, noExpandErrorUnavailable, false);
 {
 String[] results = indexNameExpressionResolver.concreteIndexNames(context, "baz*");
 assertThat(results, emptyArray());
@@ -558,7 +570,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 //error on both unavailable and no indices
 {
 IndicesOptions noExpandStrict = IndicesOptions.fromOptions(false, false, false, false, randomBoolean());
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, noExpandStrict);
+IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, noExpandStrict, false);
 IndexNotFoundException infe = expectThrows(IndexNotFoundException.class,
 () -> indexNameExpressionResolver.concreteIndexNames(context, "baz*"));
 assertThat(infe.getIndex().getName(), equalTo("baz*"));
@@ -585,7 +597,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {

 {
 IndexNameExpressionResolver.Context context =
-new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed());
+new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed(), false);
 IndexNotFoundException infe = expectThrows(IndexNotFoundException.class,
 () -> indexNameExpressionResolver.concreteIndexNames(context, "baz*"));
 assertThat(infe.getIndex().getName(), equalTo("baz*"));
@@ -593,7 +605,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {

 {
 IndexNameExpressionResolver.Context context =
-new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed());
+new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed(), false);
 IndexNotFoundException infe = expectThrows(IndexNotFoundException.class,
 () -> indexNameExpressionResolver.concreteIndexNames(context, "foo", "baz*"));
 assertThat(infe.getIndex().getName(), equalTo("baz*"));
@@ -601,7 +613,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {

 {
 IndexNameExpressionResolver.Context context =
-new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed());
+new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed(), false);
 IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
 () -> indexNameExpressionResolver.concreteIndexNames(context, "foofoobar"));
 assertThat(e.getMessage(), containsString("alias [foofoobar] has more than one index associated with it"));
@@ -609,7 +621,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {

 {
 IndexNameExpressionResolver.Context context =
-new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed());
+new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed(), false);
 IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
 () -> indexNameExpressionResolver.concreteIndexNames(context, "foo", "foofoobar"));
 assertThat(e.getMessage(), containsString("alias [foofoobar] has more than one index associated with it"));
@@ -617,7 +629,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {

 {
 IndexNameExpressionResolver.Context context =
-new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed());
+new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed(), false);
 IndexClosedException ince = expectThrows(IndexClosedException.class,
 () -> indexNameExpressionResolver.concreteIndexNames(context, "foofoo-closed", "foofoobar"));
 assertThat(ince.getMessage(), equalTo("closed"));
@@ -625,7 +637,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 }

 IndexNameExpressionResolver.Context context =
-new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed());
+new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed(), false);
 String[] results = indexNameExpressionResolver.concreteIndexNames(context, "foo", "barbaz");
 assertEquals(2, results.length);
 assertThat(results, arrayContainingInAnyOrder("foo", "foofoo"));
@@ -635,7 +647,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(Metadata.builder().build()).build();

 IndicesOptions options = IndicesOptions.strictExpandOpen();
-final IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options);
+final IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options, false);
 String[] results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY);
 assertThat(results, emptyArray());

@@ -656,7 +668,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {


 final IndexNameExpressionResolver.Context context2 =
-new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());
+new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false);
 results = indexNameExpressionResolver.concreteIndexNames(context2, Strings.EMPTY_ARRAY);
 assertThat(results, emptyArray());
 results = indexNameExpressionResolver.concreteIndexNames(context2, "foo");
@@ -667,7 +679,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 assertThat(results, emptyArray());

 final IndexNameExpressionResolver.Context context3 =
-new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, false, true, false));
+new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, false, true, false), false);
 IndexNotFoundException infe = expectThrows(IndexNotFoundException.class,
 () -> indexNameExpressionResolver.concreteIndexNames(context3, Strings.EMPTY_ARRAY));
 assertThat(infe.getResourceId().toString(), equalTo("[_all]"));
@@ -692,7 +704,8 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 .put(indexBuilder("testXXX"))
 .put(indexBuilder("kuku"));
 ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen());
+IndexNameExpressionResolver.Context context =
+new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen(), false);

 IndexNotFoundException infe = expectThrows(IndexNotFoundException.class,
 () -> indexNameExpressionResolver.concreteIndexNames(context, "testZZZ"));
@@ -704,7 +717,8 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 .put(indexBuilder("testXXX"))
 .put(indexBuilder("kuku"));
 ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());
+IndexNameExpressionResolver.Context context =
+new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false);

 assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, "testXXX", "testZZZ")),
 equalTo(newHashSet("testXXX")));
@@ -715,7 +729,8 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 .put(indexBuilder("testXXX"))
 .put(indexBuilder("kuku"));
 ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen());
+IndexNameExpressionResolver.Context context =
+new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen(), false);

 IndexNotFoundException infe = expectThrows(IndexNotFoundException.class,
 () -> indexNameExpressionResolver.concreteIndexNames(context, "testMo", "testMahdy"));
@@ -727,7 +742,8 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 .put(indexBuilder("testXXX"))
 .put(indexBuilder("kuku"));
 ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());
+IndexNameExpressionResolver.Context context =
+new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false);
 assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, new String[]{})),
 equalTo(newHashSet("kuku", "testXXX")));
 }
@@ -735,7 +751,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 Metadata.Builder mdBuilder = Metadata.builder();
 ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();
 IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state,
-IndicesOptions.fromOptions(false, false, true, true));
+IndicesOptions.fromOptions(false, false, true, true), false);
 IndexNotFoundException infe = expectThrows(IndexNotFoundException.class,
 () -> indexNameExpressionResolver.concreteIndices(context, new String[]{}));
 assertThat(infe.getMessage(), is("no such index [null] and no indices exist"));
@@ -745,7 +761,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 Metadata.Builder mdBuilder = Metadata.builder();
 ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();
 IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state,
-IndicesOptions.fromOptions(false, false, false, false));
+IndicesOptions.fromOptions(false, false, false, false), false);
 IndexNotFoundException infe = expectThrows(IndexNotFoundException.class,
 () -> indexNameExpressionResolver.concreteIndices(context, new String[]{}));
 assertThat(infe.getMessage(), is("no such index [_all] and no indices exist"));
@@ -761,16 +777,16 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();

 IndexNameExpressionResolver.Context context =
-new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, false, false));
+new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, false, false), false);
 assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, "testX*")),
 equalTo(new HashSet<String>()));
-context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, true, false));
+context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, true, false), false);
 assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, "testX*")),
 equalTo(newHashSet("testXXX", "testXXY")));
-context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, false, true));
+context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, false, true), false);
 assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, "testX*")),
 equalTo(newHashSet("testXYY")));
-context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, true, true));
+context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, true, true), false);
 assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, "testX*")),
 equalTo(newHashSet("testXXX", "testXXY", "testXYY")));
 }
@@ -788,7 +804,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();

 IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state,
-IndicesOptions.fromOptions(true, true, true, true));
+IndicesOptions.fromOptions(true, true, true, true), false);
 assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, "testX*")),
 equalTo(newHashSet("testXXX", "testXXY", "testXYY")));

@@ -1076,7 +1092,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {

 {
 ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(Metadata.builder().build()).build();
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, indicesOptions);
+IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, indicesOptions, false);

 // with no indices, asking for all indices should return empty list or exception, depending on indices options
 if (indicesOptions.allowNoIndices()) {
@@ -1095,7 +1111,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 .put(indexBuilder("bbb").state(State.OPEN).putAlias(AliasMetadata.builder("bbb_alias1")))
 .put(indexBuilder("ccc").state(State.CLOSE).putAlias(AliasMetadata.builder("ccc_alias1")));
 ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, indicesOptions);
+IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, indicesOptions, false);
 if (indicesOptions.expandWildcardsOpen() || indicesOptions.expandWildcardsClosed() || indicesOptions.allowNoIndices()) {
 String[] concreteIndices = indexNameExpressionResolver.concreteIndexNames(context, allIndices);
 assertThat(concreteIndices, notNullValue());
@@ -1125,7 +1141,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 .put(indexBuilder("bbb").state(State.OPEN).putAlias(AliasMetadata.builder("bbb_alias1")))
 .put(indexBuilder("ccc").state(State.CLOSE).putAlias(AliasMetadata.builder("ccc_alias1")));
 ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, indicesOptions);
+IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, indicesOptions, false);

 // asking for non existing wildcard pattern should return empty list or exception
 if (indicesOptions.allowNoIndices()) {
@@ -1254,20 +1270,20 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();

 IndexNameExpressionResolver.Context contextICE =
-new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpenAndForbidClosed());
+new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpenAndForbidClosed(), false);
 expectThrows(IndexClosedException.class, () -> indexNameExpressionResolver.concreteIndexNames(contextICE, "foo1-closed"));
 expectThrows(IndexClosedException.class, () -> indexNameExpressionResolver.concreteIndexNames(contextICE, "foobar1-closed"));

 IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true,
 contextICE.getOptions().allowNoIndices(), contextICE.getOptions().expandWildcardsOpen(),
-contextICE.getOptions().expandWildcardsClosed(), contextICE.getOptions()));
+contextICE.getOptions().expandWildcardsClosed(), contextICE.getOptions()), false);
 String[] results = indexNameExpressionResolver.concreteIndexNames(context, "foo1-closed");
 assertThat(results, emptyArray());

 results = indexNameExpressionResolver.concreteIndexNames(context, "foobar1-closed");
 assertThat(results, emptyArray());

-context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());
+context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false);
 results = indexNameExpressionResolver.concreteIndexNames(context, "foo1-closed");
 assertThat(results, arrayWithSize(1));
 assertThat(results, arrayContaining("foo1-closed"));
@@ -1277,7 +1293,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 assertThat(results, arrayContaining("foo1-closed"));

 // testing an alias pointing to three indices:
-context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpenAndForbidClosed());
+context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpenAndForbidClosed(), false);
 try {
 indexNameExpressionResolver.concreteIndexNames(context, "foobar2-closed");
 fail("foo2-closed should be closed, but it is open");
@@ -1287,12 +1303,12 @@ public class IndexNameExpressionResolverTests extends ESTestCase {

 context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true,
 context.getOptions().allowNoIndices(), context.getOptions().expandWildcardsOpen(),
-context.getOptions().expandWildcardsClosed(), context.getOptions()));
+context.getOptions().expandWildcardsClosed(), context.getOptions()), false);
 results = indexNameExpressionResolver.concreteIndexNames(context, "foobar2-closed");
 assertThat(results, arrayWithSize(1));
 assertThat(results, arrayContaining("foo3"));

-context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());
+context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false);
 results = indexNameExpressionResolver.concreteIndexNames(context, "foobar2-closed");
 assertThat(results, arrayWithSize(3));
 assertThat(results, arrayContainingInAnyOrder("foo1-closed", "foo2-closed", "foo3"));
@@ -1305,7 +1321,7 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 IndicesOptions[] indicesOptions = new IndicesOptions[]{ IndicesOptions.strictExpandOpen(), IndicesOptions.strictExpand(),
 IndicesOptions.lenientExpandOpen(), IndicesOptions.strictExpandOpenAndForbidClosed()};
 for (IndicesOptions options : indicesOptions) {
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options);
+IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options, false);
 String[] results = indexNameExpressionResolver.concreteIndexNames(context, "index1", "index1", "alias1");
 assertThat(results, equalTo(new String[]{"index1"}));
 }
@@ -1325,11 +1341,12 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 .put(indexBuilder("test-1").state(IndexMetadata.State.CLOSE).putAlias(AliasMetadata.builder("alias-1")));
 ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();

-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());
+IndexNameExpressionResolver.Context context =
+new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false);
 String[] strings = indexNameExpressionResolver.concreteIndexNames(context, "alias-*");
 assertArrayEquals(new String[] {"test-0"}, strings);

-context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen());
+context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen(), false);
 strings = indexNameExpressionResolver.concreteIndexNames(context, "alias-*");

 assertArrayEquals(new String[] {"test-0"}, strings);
@@ -1740,7 +1757,8 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 public void testInvalidIndex() {
 Metadata.Builder mdBuilder = Metadata.builder().put(indexBuilder("test"));
 ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();
-IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());
+IndexNameExpressionResolver.Context context =
+new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false);

 InvalidIndexNameException iine = expectThrows(InvalidIndexNameException.class,
 () -> indexNameExpressionResolver.concreteIndexNames(context, "_foo"));
@@ -1811,6 +1829,86 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
 }
 }

+public void testFullWildcardSystemIndexResolutionAllowed() {
+ClusterState state = systemIndexTestClusterState();
+SearchRequest request = new SearchRequest(randomFrom("*", "_all"));
+
+List<String> indexNames = resolveConcreteIndexNameList(state, request);
+assertThat(indexNames, containsInAnyOrder("some-other-index", ".ml-stuff", ".ml-meta", ".watches"));
+}
+
+public void testWildcardSystemIndexResolutionMultipleMatchesAllowed() {
+ClusterState state = systemIndexTestClusterState();
+SearchRequest request = new SearchRequest(".w*");
+
+List<String> indexNames = resolveConcreteIndexNameList(state, request);
+assertThat(indexNames, containsInAnyOrder(".watches"));
+}
+
+public void testWildcardSystemIndexResolutionSingleMatchAllowed() {
+ClusterState state = systemIndexTestClusterState();
+SearchRequest request = new SearchRequest(".ml-*");
+
+List<String> indexNames = resolveConcreteIndexNameList(state, request);
+assertThat(indexNames, containsInAnyOrder(".ml-meta", ".ml-stuff"));
+}
+
+public void testSingleSystemIndexResolutionAllowed() {
+ClusterState state = systemIndexTestClusterState();
+SearchRequest request = new SearchRequest(".ml-meta");
+
+List<String> indexNames = resolveConcreteIndexNameList(state, request);
+assertThat(indexNames, containsInAnyOrder(".ml-meta"));
+}
+
+public void testFullWildcardSystemIndexResolutionDeprecated() {
+threadContext.putHeader(SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY, Boolean.FALSE.toString());
+ClusterState state = systemIndexTestClusterState();
+SearchRequest request = new SearchRequest(randomFrom("*", "_all"));
+
+List<String> indexNames = resolveConcreteIndexNameList(state, request);
+assertThat(indexNames, containsInAnyOrder("some-other-index", ".ml-stuff", ".ml-meta", ".watches"));
+assertWarnings("this request accesses system indices: [.ml-meta, .ml-stuff, .watches], but in a future major version, " +
+"direct access to system indices will be prevented by default");
+
+}
+
+public void testSingleSystemIndexResolutionDeprecated() {
+threadContext.putHeader(SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY, Boolean.FALSE.toString());
+ClusterState state = systemIndexTestClusterState();
+SearchRequest request = new SearchRequest(".ml-meta");
+
+List<String> indexNames = resolveConcreteIndexNameList(state, request);
+assertThat(indexNames, containsInAnyOrder(".ml-meta"));
+assertWarnings("this request accesses system indices: [.ml-meta], but in a future major version, direct access " +
+"to system indices will be prevented by default");
+
+}
+
+public void testWildcardSystemIndexReslutionSingleMatchDeprecated() {
+threadContext.putHeader(SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY, Boolean.FALSE.toString());
+ClusterState state = systemIndexTestClusterState();
+SearchRequest request = new SearchRequest(".w*");
+
+List<String> indexNames = resolveConcreteIndexNameList(state, request);
+assertThat(indexNames, containsInAnyOrder(".watches"));
+assertWarnings("this request accesses system indices: [.watches], but in a future major version, direct access " +
+"to system indices will be prevented by default");
+
+}
+
+public void testWildcardSystemIndexResolutionMultipleMatchesDeprecated() {
+threadContext.putHeader(SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY, Boolean.FALSE.toString());
+ClusterState state = systemIndexTestClusterState();
+SearchRequest request = new SearchRequest(".ml-*");
+
+List<String> indexNames = resolveConcreteIndexNameList(state, request);
+assertThat(indexNames, containsInAnyOrder(".ml-meta", ".ml-stuff"));
+assertWarnings("this request accesses system indices: [.ml-meta, .ml-stuff], but in a future major version, direct access " +
+"to system indices will be prevented by default");
+
+}
+
 public void testDataStreams() {
 final String dataStreamName = "my-data-stream";
 IndexMetadata index1 = createBackingIndex(dataStreamName, 1).build();
@ -2049,4 +2147,21 @@ public class IndexNameExpressionResolverTests extends ESTestCase {
|
|||
names = indexNameExpressionResolver.dataStreamNames(state, IndicesOptions.lenientExpand(), "*", "-*");
|
||||
assertThat(names, empty());
|
||||
}
|
||||
|
||||
private ClusterState systemIndexTestClusterState() {
|
||||
Settings settings = Settings.builder().build();
|
||||
Metadata.Builder mdBuilder = Metadata.builder()
|
||||
.put(indexBuilder(".ml-meta", settings).state(State.OPEN).system(true))
|
||||
.put(indexBuilder(".watches", settings).state(State.OPEN).system(true))
|
||||
.put(indexBuilder(".ml-stuff", settings).state(State.OPEN).system(true))
|
||||
.put(indexBuilder("some-other-index").state(State.OPEN));
|
||||
return ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();
|
||||
}
|
||||
|
||||
private List<String> resolveConcreteIndexNameList(ClusterState state, SearchRequest request) {
|
||||
return Arrays
|
||||
.stream(indexNameExpressionResolver.concreteIndices(state, request))
|
||||
.map(i -> i.getName())
|
||||
.collect(Collectors.toList());
|
||||
}
|
||||
}
|
||||
|
|
|
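The deprecation message asserted throughout these new tests is assembled from the resolved system index names. A minimal, standalone sketch of that assembly in plain Java (not the server implementation, which may format the list differently):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Standalone sketch: build the warning text the tests above assert on.
public class SystemIndexWarningSketch {
    static String warningFor(List<String> systemIndexNames) {
        String names = systemIndexNames.stream().sorted().collect(Collectors.joining(", ", "[", "]"));
        return "this request accesses system indices: " + names
            + ", but in a future major version, direct access to system indices will be prevented by default";
    }

    public static void main(String[] args) {
        // Matches the expectation in testWildcardSystemIndexResolutionMultipleMatchesDeprecated above.
        System.out.println(warningFor(Arrays.asList(".ml-stuff", ".ml-meta")));
    }
}
```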
@@ -48,7 +48,8 @@ public class WildcardExpressionResolverTests extends ESTestCase {
        ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();
        IndexNameExpressionResolver.WildcardExpressionResolver resolver = new IndexNameExpressionResolver.WildcardExpressionResolver();

        IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());
        IndexNameExpressionResolver.Context context =
            new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false);
        assertThat(newHashSet(resolver.resolve(context, Collections.singletonList("testXXX"))), equalTo(newHashSet("testXXX")));
        assertThat(newHashSet(resolver.resolve(context, Arrays.asList("testXXX", "testYYY"))), equalTo(newHashSet("testXXX", "testYYY")));
        assertThat(newHashSet(resolver.resolve(context, Arrays.asList("testXXX", "ku*"))), equalTo(newHashSet("testXXX", "kuku")));

@@ -76,7 +77,8 @@ public class WildcardExpressionResolverTests extends ESTestCase {
        ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();
        IndexNameExpressionResolver.WildcardExpressionResolver resolver = new IndexNameExpressionResolver.WildcardExpressionResolver();

        IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());
        IndexNameExpressionResolver.Context context =
            new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false);
        assertThat(newHashSet(resolver.resolve(context, Arrays.asList("testYY*", "alias*"))),
            equalTo(newHashSet("testXXX", "testXYY", "testYYY")));
        assertThat(newHashSet(resolver.resolve(context, Collections.singletonList("-kuku"))), equalTo(newHashSet("-kuku")));

@@ -99,12 +101,12 @@ public class WildcardExpressionResolverTests extends ESTestCase {
        IndexNameExpressionResolver.WildcardExpressionResolver resolver = new IndexNameExpressionResolver.WildcardExpressionResolver();

        IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state,
            IndicesOptions.fromOptions(true, true, true, true));
            IndicesOptions.fromOptions(true, true, true, true), false);
        assertThat(newHashSet(resolver.resolve(context, Collections.singletonList("testX*"))),
            equalTo(newHashSet("testXXX", "testXXY", "testXYY")));
        context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, false, true));
        context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, false, true), false);
        assertThat(newHashSet(resolver.resolve(context, Collections.singletonList("testX*"))), equalTo(newHashSet("testXYY")));
        context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, true, false));
        context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, true, false), false);
        assertThat(newHashSet(resolver.resolve(context, Collections.singletonList("testX*"))), equalTo(newHashSet("testXXX", "testXXY")));
    }

@@ -121,7 +123,8 @@ public class WildcardExpressionResolverTests extends ESTestCase {
        ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();
        IndexNameExpressionResolver.WildcardExpressionResolver resolver = new IndexNameExpressionResolver.WildcardExpressionResolver();

        IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());
        IndexNameExpressionResolver.Context context =
            new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false);
        assertThat(newHashSet(resolver.resolve(context, Collections.singletonList("test*X*"))),
            equalTo(newHashSet("testXXX", "testXXY", "testXYY")));
        assertThat(newHashSet(resolver.resolve(context, Collections.singletonList("test*X*Y"))), equalTo(newHashSet("testXXY", "testXYY")));

@@ -140,7 +143,8 @@ public class WildcardExpressionResolverTests extends ESTestCase {
        ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build();
        IndexNameExpressionResolver.WildcardExpressionResolver resolver = new IndexNameExpressionResolver.WildcardExpressionResolver();

        IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());
        IndexNameExpressionResolver.Context context =
            new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false);
        assertThat(newHashSet(resolver.resolve(context, Collections.singletonList("_all"))),
            equalTo(newHashSet("testXXX", "testXYY", "testYYY")));
    }

@@ -158,15 +162,15 @@ public class WildcardExpressionResolverTests extends ESTestCase {
        IndicesOptions indicesAndAliasesOptions = IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), true, false, true, false,
            false, false);
        IndexNameExpressionResolver.Context indicesAndAliasesContext =
            new IndexNameExpressionResolver.Context(state, indicesAndAliasesOptions);
            new IndexNameExpressionResolver.Context(state, indicesAndAliasesOptions, false);
        // ignoreAliases option is set, WildcardExpressionResolver throws error when
        IndicesOptions skipAliasesIndicesOptions = IndicesOptions.fromOptions(true, true, true, false, true, false, true, false);
        IndexNameExpressionResolver.Context skipAliasesLenientContext =
            new IndexNameExpressionResolver.Context(state, skipAliasesIndicesOptions);
            new IndexNameExpressionResolver.Context(state, skipAliasesIndicesOptions, false);
        // ignoreAliases option is set, WildcardExpressionResolver resolves the provided expressions only against the defined indices
        IndicesOptions errorOnAliasIndicesOptions = IndicesOptions.fromOptions(false, false, true, false, true, false, true, false);
        IndexNameExpressionResolver.Context skipAliasesStrictContext =
            new IndexNameExpressionResolver.Context(state, errorOnAliasIndicesOptions);
            new IndexNameExpressionResolver.Context(state, errorOnAliasIndicesOptions, false);

        {
            List<String> indices = resolver.resolve(indicesAndAliasesContext, Collections.singletonList("foo_a*"));

@@ -232,7 +236,7 @@ public class WildcardExpressionResolverTests extends ESTestCase {
        IndicesOptions indicesAndAliasesOptions = IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), true, false, true, false,
            false, false);
        IndexNameExpressionResolver.Context indicesAndAliasesContext =
            new IndexNameExpressionResolver.Context(state, indicesAndAliasesOptions);
            new IndexNameExpressionResolver.Context(state, indicesAndAliasesOptions, false);

        // data streams are not included but expression matches the data stream
        List<String> indices = resolver.resolve(indicesAndAliasesContext, Collections.singletonList("foo_*"));

@@ -247,7 +251,7 @@ public class WildcardExpressionResolverTests extends ESTestCase {
        IndicesOptions indicesAndAliasesOptions = IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), true, false, true, false,
            false, false);
        IndexNameExpressionResolver.Context indicesAliasesAndDataStreamsContext = new IndexNameExpressionResolver.Context(state,
            indicesAndAliasesOptions, false, false, true);
            indicesAndAliasesOptions, false, false, true, false);

        // data stream's corresponding backing indices are resolved
        List<String> indices = resolver.resolve(indicesAliasesAndDataStreamsContext, Collections.singletonList("foo_*"));

@@ -264,7 +268,7 @@ public class WildcardExpressionResolverTests extends ESTestCase {
        IndicesOptions indicesAliasesAndExpandHiddenOptions = IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), true, false,
            true, true, false, false, false);
        IndexNameExpressionResolver.Context indicesAliasesDataStreamsAndHiddenIndices = new IndexNameExpressionResolver.Context(state,
            indicesAliasesAndExpandHiddenOptions, false, false, true);
            indicesAliasesAndExpandHiddenOptions, false, false, true, false);

        // data stream's corresponding backing indices are resolved
        List<String> indices = resolver.resolve(indicesAliasesDataStreamsAndHiddenIndices, Collections.singletonList("foo_*"));

@@ -290,12 +294,12 @@ public class WildcardExpressionResolverTests extends ESTestCase {
        // expressions against the defined indices and aliases
        IndicesOptions indicesAndAliasesOptions = IndicesOptions.fromOptions(false, false, true, false, true, false, false, false);
        IndexNameExpressionResolver.Context indicesAndAliasesContext =
            new IndexNameExpressionResolver.Context(state, indicesAndAliasesOptions);
            new IndexNameExpressionResolver.Context(state, indicesAndAliasesOptions, false);

        // ignoreAliases option is set, WildcardExpressionResolver resolves the provided expressions
        // only against the defined indices
        IndicesOptions onlyIndicesOptions = IndicesOptions.fromOptions(false, false, true, false, true, false, true, false);
        IndexNameExpressionResolver.Context onlyIndicesContext = new IndexNameExpressionResolver.Context(state, onlyIndicesOptions);
        IndexNameExpressionResolver.Context onlyIndicesContext = new IndexNameExpressionResolver.Context(state, onlyIndicesOptions, false);

        {
            Set<String> matches = IndexNameExpressionResolver.WildcardExpressionResolver.matches(indicesAndAliasesContext,
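The recurring edit in these hunks is that every `IndexNameExpressionResolver.Context` call site gains a trailing boolean. A generic sketch of that constructor-threading pattern follows; the field name `isSystemIndexAccessAllowed` and the defaulting behaviour are assumptions for illustration only and may not match the real parameter in Elasticsearch:

```java
// Illustrative only: threading a new flag through an existing type without breaking old callers.
class Context {
    private final Object state;    // stand-in for ClusterState
    private final Object options;  // stand-in for IndicesOptions
    private final boolean isSystemIndexAccessAllowed; // assumed name

    Context(Object state, Object options) {
        this(state, options, true); // assumed legacy default
    }

    Context(Object state, Object options, boolean isSystemIndexAccessAllowed) {
        this.state = state;
        this.options = options;
        this.isSystemIndexAccessAllowed = isSystemIndexAccessAllowed;
    }

    boolean isSystemIndexAccessAllowed() {
        return isSystemIndexAccessAllowed;
    }
}
```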
@@ -48,6 +48,7 @@ import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.BigArrays;
import org.elasticsearch.common.util.PageCacheRecycler;
import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.core.internal.io.IOUtils;
import org.elasticsearch.env.Environment;
import org.elasticsearch.env.NodeEnvironment;

@@ -179,7 +180,7 @@ public class IndexModuleTests extends ESTestCase {
            engineFactory,
            Collections.emptyMap(),
            () -> true,
            new IndexNameExpressionResolver(),
            new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)),
            Collections.emptyMap());
        module.setReaderWrapper(s -> new Wrapper());

@@ -201,7 +202,7 @@ public class IndexModuleTests extends ESTestCase {
        final Map<String, IndexStorePlugin.DirectoryFactory> indexStoreFactories = singletonMap(
            "foo_store", new FooFunction());
        final IndexModule module = new IndexModule(indexSettings, emptyAnalysisRegistry, new InternalEngineFactory(), indexStoreFactories,
            () -> true, new IndexNameExpressionResolver(), Collections.emptyMap());
            () -> true, new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), Collections.emptyMap());

        final IndexService indexService = newIndexService(module);
        assertThat(indexService.getDirectoryFactory(), instanceOf(FooFunction.class));

@@ -514,7 +515,7 @@ public class IndexModuleTests extends ESTestCase {
            new InternalEngineFactory(),
            Collections.emptyMap(),
            () -> true,
            new IndexNameExpressionResolver(),
            new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)),
            recoveryStateFactories);

        final IndexService indexService = newIndexService(module);

@@ -535,7 +536,7 @@ public class IndexModuleTests extends ESTestCase {

    private static IndexModule createIndexModule(IndexSettings indexSettings, AnalysisRegistry emptyAnalysisRegistry) {
        return new IndexModule(indexSettings, emptyAnalysisRegistry, new InternalEngineFactory(), Collections.emptyMap(), () -> true,
            new IndexNameExpressionResolver(), Collections.emptyMap());
            new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), Collections.emptyMap());
    }

    class CustomQueryCache implements QueryCache {
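Every call site in this file (and in several files below) swaps the no-arg resolver for one that carries a `ThreadContext`, which is where per-request headers such as the system-index-access flag live. A small self-contained sketch of a shared test helper mirroring the updated construction; the helper class name is hypothetical, but the constructor call is exactly the one used in these hunks:

```java
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;

// Sketch only: tests that do not care about request headers can hand the resolver a
// throwaway ThreadContext built from empty settings.
final class TestResolvers {
    private TestResolvers() {}

    static IndexNameExpressionResolver emptyContextResolver() {
        return new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY));
    }
}
```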
@@ -28,6 +28,7 @@ import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.Metadata;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.test.ESTestCase;
import org.junit.Before;

@@ -49,8 +50,10 @@ public class SearchIndexNameMatcherTests extends ESTestCase {
        ClusterService clusterService = mock(ClusterService.class);
        when(clusterService.state()).thenReturn(state);

        matcher = new SearchIndexNameMatcher("index1", "", clusterService, new IndexNameExpressionResolver());
        remoteMatcher = new SearchIndexNameMatcher("index1", "cluster", clusterService, new IndexNameExpressionResolver());
        matcher = new SearchIndexNameMatcher("index1", "", clusterService,
            new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)));
        remoteMatcher = new SearchIndexNameMatcher("index1", "cluster", clusterService,
            new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)));
    }

    private static IndexMetadata.Builder indexBuilder(String index) {
@@ -81,6 +81,7 @@ import org.elasticsearch.common.UUIDs;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.IndexScopedSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.env.Environment;
import org.elasticsearch.env.TestEnvironment;

@@ -153,7 +154,7 @@ public class ClusterStateChanges {
        shardStartedClusterStateTaskExecutor
            = new ShardStateAction.ShardStartedClusterStateTaskExecutor(allocationService, null, () -> Priority.NORMAL, logger);
        ActionFilters actionFilters = new ActionFilters(Collections.emptySet());
        IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver();
        IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY));
        DestructiveOperations destructiveOperations = new DestructiveOperations(SETTINGS, clusterSettings);
        Environment environment = TestEnvironment.newEnvironment(SETTINGS);
        Transport transport = mock(Transport.class); // it's not used
@@ -22,6 +22,7 @@ package org.elasticsearch.rest;
import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.common.Table;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.common.xcontent.json.JsonXContent;

@@ -29,6 +30,8 @@ import org.elasticsearch.rest.action.cat.AbstractCatAction;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.rest.FakeRestChannel;
import org.elasticsearch.test.rest.FakeRestRequest;
import org.elasticsearch.threadpool.TestThreadPool;
import org.elasticsearch.threadpool.ThreadPool;

import java.io.IOException;
import java.util.Collections;

@@ -39,9 +42,24 @@ import java.util.concurrent.atomic.AtomicBoolean;

import static org.hamcrest.core.StringContains.containsString;
import static org.hamcrest.object.HasToString.hasToString;
import static org.mockito.Mockito.mock;

public class BaseRestHandlerTests extends ESTestCase {
    private NodeClient mockClient;
    private ThreadPool threadPool;

    @Override
    public void setUp() throws Exception {
        super.setUp();
        threadPool = new TestThreadPool(this.getClass().getSimpleName() + "ThreadPool");
        mockClient = new NodeClient(Settings.EMPTY, threadPool);
    }

    @Override
    public void tearDown() throws Exception {
        super.tearDown();
        threadPool.shutdown();
        mockClient.close();
    }

    public void testOneUnconsumedParameters() throws Exception {
        final AtomicBoolean executed = new AtomicBoolean();

@@ -69,7 +87,7 @@ public class BaseRestHandlerTests extends ESTestCase {
        RestRequest request = new FakeRestRequest.Builder(xContentRegistry()).withParams(params).build();
        RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1);
        final IllegalArgumentException e =
            expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mock(NodeClient.class)));
            expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mockClient));
        assertThat(e, hasToString(containsString("request [/] contains unrecognized parameter: [unconsumed]")));
        assertFalse(executed.get());
    }

@@ -101,7 +119,7 @@ public class BaseRestHandlerTests extends ESTestCase {
        RestRequest request = new FakeRestRequest.Builder(xContentRegistry()).withParams(params).build();
        RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1);
        final IllegalArgumentException e =
            expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mock(NodeClient.class)));
            expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mockClient));
        assertThat(e, hasToString(containsString("request [/] contains unrecognized parameters: [unconsumed-first], [unconsumed-second]")));
        assertFalse(executed.get());
    }

@@ -145,7 +163,7 @@ public class BaseRestHandlerTests extends ESTestCase {
        RestRequest request = new FakeRestRequest.Builder(xContentRegistry()).withParams(params).build();
        RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1);
        final IllegalArgumentException e =
            expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mock(NodeClient.class)));
            expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mockClient));
        assertThat(
            e,
            hasToString(containsString(

@@ -188,7 +206,7 @@ public class BaseRestHandlerTests extends ESTestCase {
        params.put("response_param", randomAlphaOfLength(8));
        RestRequest request = new FakeRestRequest.Builder(xContentRegistry()).withParams(params).build();
        RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1);
        handler.handleRequest(request, channel, mock(NodeClient.class));
        handler.handleRequest(request, channel, mockClient);
        assertTrue(executed.get());
    }

@@ -218,7 +236,7 @@ public class BaseRestHandlerTests extends ESTestCase {
        params.put("human", null);
        RestRequest request = new FakeRestRequest.Builder(xContentRegistry()).withParams(params).build();
        RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1);
        handler.handleRequest(request, channel, mock(NodeClient.class));
        handler.handleRequest(request, channel, mockClient);
        assertTrue(executed.get());
    }

@@ -262,7 +280,7 @@ public class BaseRestHandlerTests extends ESTestCase {
        params.put("time", randomAlphaOfLength(8));
        RestRequest request = new FakeRestRequest.Builder(xContentRegistry()).withParams(params).build();
        RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1);
        handler.handleRequest(request, channel, mock(NodeClient.class));
        handler.handleRequest(request, channel, mockClient);
        assertTrue(executed.get());
    }

@@ -291,7 +309,7 @@ public class BaseRestHandlerTests extends ESTestCase {
                .withContent(new BytesArray(builder.toString()), XContentType.JSON)
                .build();
            final RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1);
            handler.handleRequest(request, channel, mock(NodeClient.class));
            handler.handleRequest(request, channel, mockClient);
            assertTrue(executed.get());
        }
    }

@@ -317,7 +335,7 @@ public class BaseRestHandlerTests extends ESTestCase {

        final RestRequest request = new FakeRestRequest.Builder(xContentRegistry()).build();
        final RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1);
        handler.handleRequest(request, channel, mock(NodeClient.class));
        handler.handleRequest(request, channel, mockClient);
        assertTrue(executed.get());
    }

@@ -346,7 +364,7 @@ public class BaseRestHandlerTests extends ESTestCase {
            .build();
        final RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1);
        final IllegalArgumentException e =
            expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mock(NodeClient.class)));
            expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mockClient));
        assertThat(e, hasToString(containsString("request [GET /] does not support having a body")));
        assertFalse(executed.get());
    }
@@ -33,6 +33,7 @@ import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.common.xcontent.yaml.YamlXContent;
import org.elasticsearch.core.internal.io.IOUtils;
import org.elasticsearch.http.HttpInfo;
import org.elasticsearch.http.HttpRequest;
import org.elasticsearch.http.HttpResponse;

@@ -40,8 +41,10 @@ import org.elasticsearch.http.HttpServerTransport;
import org.elasticsearch.http.HttpStats;
import org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.client.NoOpNodeClient;
import org.elasticsearch.test.rest.FakeRestRequest;
import org.elasticsearch.usage.UsageService;
import org.junit.After;
import org.junit.Before;

import java.io.IOException;

@@ -75,6 +78,7 @@ public class RestControllerTests extends ESTestCase {
    private RestController restController;
    private HierarchyCircuitBreakerService circuitBreakerService;
    private UsageService usageService;
    private NodeClient client;

    @Before
    public void setup() {

@@ -91,7 +95,8 @@ public class RestControllerTests extends ESTestCase {
        inFlightRequestsBreaker = circuitBreakerService.getBreaker(CircuitBreaker.IN_FLIGHT_REQUESTS);

        HttpServerTransport httpServerTransport = new TestHttpServerTransport();
        restController = new RestController(Collections.emptySet(), null, null, circuitBreakerService, usageService);
        client = new NoOpNodeClient(this.getTestName());
        restController = new RestController(Collections.emptySet(), null, client, circuitBreakerService, usageService);
        restController.registerHandler(RestRequest.Method.GET, "/",
            (request, channel, client) -> channel.sendResponse(
                new BytesRestResponse(RestStatus.OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)));

@@ -102,8 +107,13 @@ public class RestControllerTests extends ESTestCase {
        httpServerTransport.start();
    }

    @After
    public void teardown() throws IOException {
        IOUtils.close(client);
    }

    public void testApplyRelevantHeaders() throws Exception {
        final ThreadContext threadContext = new ThreadContext(Settings.EMPTY);
        final ThreadContext threadContext = client.threadPool().getThreadContext();
        Set<RestHeaderDefinition> headers = new HashSet<>(Arrays.asList(new RestHeaderDefinition("header.1", true),
            new RestHeaderDefinition("header.2", true)));
        final RestController restController = new RestController(headers, null, null, circuitBreakerService, usageService);

@@ -139,7 +149,7 @@ public class RestControllerTests extends ESTestCase {
    }

    public void testRequestWithDisallowedMultiValuedHeader() {
        final ThreadContext threadContext = new ThreadContext(Settings.EMPTY);
        final ThreadContext threadContext = client.threadPool().getThreadContext();
        Set<RestHeaderDefinition> headers = new HashSet<>(Arrays.asList(new RestHeaderDefinition("header.1", true),
            new RestHeaderDefinition("header.2", false)));
        final RestController restController = new RestController(headers, null, null, circuitBreakerService, usageService);

@@ -153,10 +163,10 @@ public class RestControllerTests extends ESTestCase {
    }

    public void testRequestWithDisallowedMultiValuedHeaderButSameValues() {
        final ThreadContext threadContext = new ThreadContext(Settings.EMPTY);
        final ThreadContext threadContext = client.threadPool().getThreadContext();
        Set<RestHeaderDefinition> headers = new HashSet<>(Arrays.asList(new RestHeaderDefinition("header.1", true),
            new RestHeaderDefinition("header.2", false)));
        final RestController restController = new RestController(headers, null, null, circuitBreakerService, usageService);
        final RestController restController = new RestController(headers, null, client, circuitBreakerService, usageService);
        Map<String, List<String>> restHeaders = new HashMap<>();
        restHeaders.put("header.1", Collections.singletonList("boo"));
        restHeaders.put("header.2", Arrays.asList("foo", "foo"));

@@ -237,11 +247,11 @@ public class RestControllerTests extends ESTestCase {
            h -> {
                assertSame(handler, h);
                return (RestRequest request, RestChannel channel, NodeClient client) -> wrapperCalled.set(true);
            }, null, circuitBreakerService, usageService);
            }, client, circuitBreakerService, usageService);
        restController.registerHandler(RestRequest.Method.GET, "/wrapped", handler);
        RestRequest request = testRestRequest("/wrapped", "{}", XContentType.JSON);
        AssertingChannel channel = new AssertingChannel(request, true, RestStatus.BAD_REQUEST);
        restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(request, channel, client.threadPool().getThreadContext());
        httpServerTransport.start();
        assertTrue(wrapperCalled.get());
        assertFalse(handlerCalled.get());

@@ -253,7 +263,7 @@ public class RestControllerTests extends ESTestCase {
        RestRequest request = testRestRequest("/", content, XContentType.JSON);
        AssertingChannel channel = new AssertingChannel(request, true, RestStatus.OK);

        restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(request, channel, client.threadPool().getThreadContext());

        assertEquals(0, inFlightRequestsBreaker.getTrippedCount());
        assertEquals(0, inFlightRequestsBreaker.getUsed());

@@ -265,7 +275,7 @@ public class RestControllerTests extends ESTestCase {
        RestRequest request = testRestRequest("/error", content, XContentType.JSON);
        AssertingChannel channel = new AssertingChannel(request, true, RestStatus.BAD_REQUEST);

        restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(request, channel, client.threadPool().getThreadContext());

        assertEquals(0, inFlightRequestsBreaker.getTrippedCount());
        assertEquals(0, inFlightRequestsBreaker.getUsed());

@@ -278,7 +288,7 @@ public class RestControllerTests extends ESTestCase {
        RestRequest request = testRestRequest("/error", content, XContentType.JSON);
        ExceptionThrowingChannel channel = new ExceptionThrowingChannel(request, true);

        restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(request, channel, client.threadPool().getThreadContext());

        assertEquals(0, inFlightRequestsBreaker.getTrippedCount());
        assertEquals(0, inFlightRequestsBreaker.getUsed());

@@ -290,7 +300,7 @@ public class RestControllerTests extends ESTestCase {
        RestRequest request = testRestRequest("/", content, XContentType.JSON);
        AssertingChannel channel = new AssertingChannel(request, true, RestStatus.TOO_MANY_REQUESTS);

        restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(request, channel, client.threadPool().getThreadContext());

        assertEquals(1, inFlightRequestsBreaker.getTrippedCount());
        assertEquals(0, inFlightRequestsBreaker.getUsed());

@@ -306,7 +316,7 @@ public class RestControllerTests extends ESTestCase {
            new BytesRestResponse(RestStatus.OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)));

        assertFalse(channel.getSendResponseCalled());
        restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(request, channel, client.threadPool().getThreadContext());
        assertTrue(channel.getSendResponseCalled());
    }

@@ -315,7 +325,7 @@ public class RestControllerTests extends ESTestCase {
        AssertingChannel channel = new AssertingChannel(fakeRestRequest, true, RestStatus.OK);

        assertFalse(channel.getSendResponseCalled());
        restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext());
        assertTrue(channel.getSendResponseCalled());
    }

@@ -333,7 +343,7 @@ public class RestControllerTests extends ESTestCase {
        });

        assertFalse(channel.getSendResponseCalled());
        restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext());
        assertTrue(channel.getSendResponseCalled());
    }

@@ -344,7 +354,7 @@ public class RestControllerTests extends ESTestCase {
        AssertingChannel channel = new AssertingChannel(fakeRestRequest, true, RestStatus.NOT_ACCEPTABLE);

        assertFalse(channel.getSendResponseCalled());
        restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext());
        assertTrue(channel.getSendResponseCalled());
    }

@@ -368,7 +378,7 @@ public class RestControllerTests extends ESTestCase {
        });

        assertFalse(channel.getSendResponseCalled());
        restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext());
        assertTrue(channel.getSendResponseCalled());
    }

@@ -393,7 +403,7 @@ public class RestControllerTests extends ESTestCase {
        });

        assertFalse(channel.getSendResponseCalled());
        restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext());
        assertTrue(channel.getSendResponseCalled());
    }

@@ -414,7 +424,7 @@ public class RestControllerTests extends ESTestCase {
        });

        assertFalse(channel.getSendResponseCalled());
        restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext());
        assertTrue(channel.getSendResponseCalled());
    }

@@ -435,7 +445,7 @@ public class RestControllerTests extends ESTestCase {
            }
        });
        assertFalse(channel.getSendResponseCalled());
        restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext());
        assertTrue(channel.getSendResponseCalled());
    }

@@ -457,7 +467,7 @@ public class RestControllerTests extends ESTestCase {
            }
        });
        assertFalse(channel.getSendResponseCalled());
        restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext());
        assertTrue(channel.getSendResponseCalled());
    }

@@ -466,7 +476,7 @@ public class RestControllerTests extends ESTestCase {
        final AssertingChannel channel = new AssertingChannel(fakeRestRequest, true, RestStatus.BAD_REQUEST);
        restController.dispatchBadRequest(
            channel,
            new ThreadContext(Settings.EMPTY),
            client.threadPool().getThreadContext(),
            randomBoolean() ? new IllegalStateException("bad request") : new Throwable("bad request"));
        assertTrue(channel.getSendResponseCalled());
        assertThat(channel.getRestResponse().content().utf8ToString(), containsString("bad request"));

@@ -475,7 +485,7 @@ public class RestControllerTests extends ESTestCase {
    public void testDispatchBadRequestUnknownCause() {
        final FakeRestRequest fakeRestRequest = new FakeRestRequest.Builder(NamedXContentRegistry.EMPTY).build();
        final AssertingChannel channel = new AssertingChannel(fakeRestRequest, true, RestStatus.BAD_REQUEST);
        restController.dispatchBadRequest(channel, new ThreadContext(Settings.EMPTY), null);
        restController.dispatchBadRequest(channel, client.threadPool().getThreadContext(), null);
        assertTrue(channel.getSendResponseCalled());
        assertThat(channel.getRestResponse().content().utf8ToString(), containsString("unknown cause"));
    }

@@ -486,7 +496,7 @@ public class RestControllerTests extends ESTestCase {
            .withPath("/favicon.ico")
            .build();
        final AssertingChannel channel = new AssertingChannel(fakeRestRequest, false, RestStatus.OK);
        restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext());
        assertTrue(channel.getSendResponseCalled());
        assertThat(channel.getRestResponse().contentType(), containsString("image/x-icon"));
    }

@@ -498,7 +508,7 @@ public class RestControllerTests extends ESTestCase {
            .withPath("/favicon.ico")
            .build();
        final AssertingChannel channel = new AssertingChannel(fakeRestRequest, true, RestStatus.METHOD_NOT_ALLOWED);
        restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext());
        assertTrue(channel.getSendResponseCalled());
        assertThat(channel.getRestResponse().getHeaders().containsKey("Allow"), equalTo(true));
        assertThat(channel.getRestResponse().getHeaders().get("Allow"), hasItem(equalTo(RestRequest.Method.GET.toString())));

@@ -571,7 +581,7 @@ public class RestControllerTests extends ESTestCase {

        final AssertingChannel channel = new AssertingChannel(request, true, RestStatus.METHOD_NOT_ALLOWED);
        assertFalse(channel.getSendResponseCalled());
        restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY));
        restController.dispatchRequest(request, channel, client.threadPool().getThreadContext());
        assertTrue(channel.getSendResponseCalled());
        assertThat(channel.getRestResponse().getHeaders().containsKey("Allow"), equalTo(true));
        assertThat(channel.getRestResponse().getHeaders().get("Allow"), hasItem(equalTo(RestRequest.Method.GET.toString())));
@@ -19,6 +19,7 @@
package org.elasticsearch.rest.action.admin.indices;

import org.elasticsearch.action.admin.indices.analyze.AnalyzeAction;
import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentParser;

@@ -26,6 +27,7 @@ import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.analysis.NameOrDefinition;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.client.NoOpNodeClient;
import org.elasticsearch.test.rest.FakeRestRequest;

import java.io.IOException;

@@ -95,8 +97,10 @@ public class RestAnalyzeActionTests extends ESTestCase {
        RestAnalyzeAction action = new RestAnalyzeAction();
        RestRequest request = new FakeRestRequest.Builder(xContentRegistry())
            .withContent(new BytesArray("{invalid_json}"), XContentType.JSON).build();
        IOException e = expectThrows(IOException.class, () -> action.handleRequest(request, null, null));
        assertThat(e.getMessage(), containsString("expecting double-quote"));
        try (NodeClient client = new NoOpNodeClient(this.getClass().getSimpleName())) {
            IOException e = expectThrows(IOException.class, () -> action.handleRequest(request, null, client));
            assertThat(e.getMessage(), containsString("expecting double-quote"));
        }
    }

    public void testParseXContentForAnalyzeRequestWithUnknownParamThrowsException() throws Exception {
@@ -50,6 +50,10 @@ public class RestGetFieldMappingActionTests extends RestActionTestCase {
            params.put(INCLUDE_TYPE_NAME_PARAMETER, "false");
            path = "some_index/_mapping/field/some_field";
        }

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        RestRequest deprecatedRequest = new FakeRestRequest.Builder(xContentRegistry())
            .withMethod(RestRequest.Method.GET)
            .withPath(path)

@@ -76,6 +80,9 @@ public class RestGetFieldMappingActionTests extends RestActionTestCase {
            .withParams(params)
            .build();

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        FakeRestChannel channel = new FakeRestChannel(request, false, 1);
        ThreadContext threadContext = new ThreadContext(Settings.EMPTY);
        controller().dispatchRequest(request, channel, threadContext);
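The `(arg1, arg2) -> null` verifier in these tests only satisfies `RestActionTestCase`'s check that a verifier was registered before a request is dispatched. When a test does care about the outgoing transport request, the same hook can record and assert on it, as the `RestIndexActionTests` hunk further down in this commit does. A fragment-style sketch, assumed to run inside a `RestActionTestCase` subclass with the usual test imports:

```java
// Sketch: richer use of the verifier hook than the no-op form above.
SetOnce<Boolean> executeCalled = new SetOnce<>();
verifyingClient.setExecuteVerifier((actionType, request) -> {
    executeCalled.set(true);   // assertions on `request` would go here
    return null;               // dispatch-only tests never need a transport response
});
dispatchRequest(request);
assertThat(executeCalled.get(), equalTo(true));
```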
@@ -78,6 +78,9 @@ public class RestGetMappingActionTests extends RestActionTestCase {
            .withParams(params)
            .build();

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        FakeRestChannel channel = new FakeRestChannel(request, false, 1);
        ThreadContext threadContext = new ThreadContext(Settings.EMPTY);
        controller().dispatchRequest(request, channel, threadContext);

@@ -98,6 +101,9 @@ public class RestGetMappingActionTests extends RestActionTestCase {
            .withPath("/some_index/_mappings")
            .build();

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        FakeRestChannel channel = new FakeRestChannel(request, false, 1);
        ThreadContext threadContext = new ThreadContext(Settings.EMPTY);
        controller().dispatchRequest(request, channel, threadContext);
@@ -18,9 +18,9 @@
 */
package org.elasticsearch.rest.action.admin.indices;

import org.elasticsearch.action.ActionType;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionType;
import org.elasticsearch.action.admin.indices.validate.query.ValidateQueryAction;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.TransportAction;

@@ -43,6 +43,7 @@ import org.elasticsearch.threadpool.TestThreadPool;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.usage.UsageService;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;

import java.util.Collections;

@@ -87,6 +88,12 @@ public class RestValidateQueryActionTests extends AbstractSearchTestCase {
        controller.registerHandler(action);
    }

    @Before
    public void ensureCleanContext() {
        // Make sure we have a clean context for each test
        threadPool.getThreadContext().stashContext();
    }

    @AfterClass
    public static void terminateThreadPool() throws InterruptedException {
        terminate(threadPool);
@@ -19,8 +19,11 @@

package org.elasticsearch.rest.action.document;

import org.apache.lucene.util.SetOnce;
import org.elasticsearch.Version;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.common.bytes.BytesArray;

@@ -28,15 +31,14 @@ import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.rest.RestChannel;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.client.NoOpNodeClient;
import org.elasticsearch.test.rest.FakeRestRequest;
import org.hamcrest.CustomMatcher;
import org.mockito.Mockito;

import java.util.HashMap;
import java.util.Map;

import static org.mockito.Matchers.any;
import static org.mockito.Matchers.argThat;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.hasSize;
import static org.mockito.Mockito.mock;

/**

@@ -45,32 +47,34 @@ import static org.mockito.Mockito.mock;
public class RestBulkActionTests extends ESTestCase {

    public void testBulkPipelineUpsert() throws Exception {
        final NodeClient mockClient = mock(NodeClient.class);
        final Map<String, String> params = new HashMap<>();
        params.put("pipeline", "timestamps");
        new RestBulkAction(settings(Version.CURRENT).build())
            .handleRequest(
                new FakeRestRequest.Builder(
                    xContentRegistry()).withPath("my_index/_bulk").withParams(params)
                    .withContent(
                        new BytesArray(
                            "{\"index\":{\"_id\":\"1\"}}\n" +
                            "{\"field1\":\"val1\"}\n" +
                            "{\"update\":{\"_id\":\"2\"}}\n" +
                            "{\"script\":{\"source\":\"ctx._source.counter++;\"},\"upsert\":{\"field1\":\"upserted_val\"}}\n"
                        ),
                        XContentType.JSON
                    ).withMethod(RestRequest.Method.POST).build(),
                mock(RestChannel.class), mockClient
            );
        Mockito.verify(mockClient)
            .bulk(argThat(new CustomMatcher<BulkRequest>("Pipeline in upsert request") {
                @Override
                public boolean matches(final Object item) {
                    BulkRequest request = (BulkRequest) item;
                    UpdateRequest update = (UpdateRequest) request.requests().get(1);
                    return "timestamps".equals(update.upsertRequest().getPipeline());
                }
            }), any());
        SetOnce<Boolean> bulkCalled = new SetOnce<>();
        try (NodeClient verifyingClient = new NoOpNodeClient(this.getTestName()) {
            @Override
            public void bulk(BulkRequest request, ActionListener<BulkResponse> listener) {
                bulkCalled.set(true);
                assertThat(request.requests(), hasSize(2));
                UpdateRequest updateRequest = (UpdateRequest) request.requests().get(1);
                assertThat(updateRequest.upsertRequest().getPipeline(), equalTo("timestamps"));
            }
        }) {
            final Map<String, String> params = new HashMap<>();
            params.put("pipeline", "timestamps");
            new RestBulkAction(settings(Version.CURRENT).build())
                .handleRequest(
                    new FakeRestRequest.Builder(
                        xContentRegistry()).withPath("my_index/_bulk").withParams(params)
                        .withContent(
                            new BytesArray(
                                "{\"index\":{\"_id\":\"1\"}}\n" +
                                "{\"field1\":\"val1\"}\n" +
                                "{\"update\":{\"_id\":\"2\"}}\n" +
                                "{\"script\":{\"source\":\"ctx._source.counter++;\"},\"upsert\":{\"field1\":\"upserted_val\"}}\n"
                            ),
                            XContentType.JSON
                        ).withMethod(RestRequest.Method.POST).build(),
                    mock(RestChannel.class), verifyingClient
                );
            assertThat(bulkCalled.get(), equalTo(true));
        }
    }
}
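The rewritten test uses Lucene's `SetOnce` rather than a plain boolean flag: a second `set()` throws, so the test would also fail if the handler issued more than one bulk request. A tiny standalone illustration of that semantics (assuming Lucene core on the classpath, which these tests already have):

```java
import org.apache.lucene.util.SetOnce;

public class SetOnceDemo {
    public static void main(String[] args) {
        SetOnce<Boolean> called = new SetOnce<>();
        called.set(true);
        System.out.println(called.get());  // true
        // called.set(true);               // would throw SetOnce.AlreadySetException
    }
}
```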
@@ -21,8 +21,8 @@ package org.elasticsearch.rest.action.document;

import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.RestRequest.Method;
import org.elasticsearch.test.rest.RestActionTestCase;
import org.elasticsearch.test.rest.FakeRestRequest;
import org.elasticsearch.test.rest.RestActionTestCase;
import org.junit.Before;

public class RestDeleteActionTests extends RestActionTestCase {

@@ -33,6 +33,9 @@ public class RestDeleteActionTests extends RestActionTestCase {
    }

    public void testTypeInPath() {
        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        RestRequest deprecatedRequest = new FakeRestRequest.Builder(xContentRegistry())
            .withMethod(Method.DELETE)
            .withPath("/some_index/some_type/some_id")
@@ -32,6 +32,9 @@ public class RestGetActionTests extends RestActionTestCase {
    }

    public void testTypeInPathWithGet() {
        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        FakeRestRequest.Builder deprecatedRequest = new FakeRestRequest.Builder(xContentRegistry())
            .withPath("/some_index/some_type/some_id");
        dispatchRequest(deprecatedRequest.withMethod(Method.GET).build());

@@ -43,6 +46,9 @@ public class RestGetActionTests extends RestActionTestCase {
    }

    public void testTypeInPathWithHead() {
        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        FakeRestRequest.Builder deprecatedRequest = new FakeRestRequest.Builder(xContentRegistry())
            .withPath("/some_index/some_type/some_id");
        dispatchRequest(deprecatedRequest.withMethod(Method.HEAD).build());
@@ -23,6 +23,7 @@ import org.elasticsearch.ResourceNotFoundException;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.index.get.GetResult;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.RestRequest.Method;

@@ -65,13 +66,19 @@ public class RestGetSourceActionTests extends RestActionTestCase {
     * test deprecation is logged if type is used in path
     */
    public void testTypeInPath() {
        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);
        for (Method method : Arrays.asList(Method.GET, Method.HEAD)) {
            RestRequest request = new FakeRestRequest.Builder(xContentRegistry())
            // Ensure we have a fresh context for each request so we don't get duplicate headers
            try (ThreadContext.StoredContext ignore = verifyingClient.threadPool().getThreadContext().stashContext()) {
                RestRequest request = new FakeRestRequest.Builder(xContentRegistry())
                    .withMethod(method)
                    .withPath("/some_index/some_type/id/_source")
                    .build();
                dispatchRequest(request);
                assertWarnings(RestGetSourceAction.TYPES_DEPRECATION_MESSAGE);

                dispatchRequest(request);
                assertWarnings(RestGetSourceAction.TYPES_DEPRECATION_MESSAGE);
            }
        }
    }

@@ -79,9 +86,13 @@ public class RestGetSourceActionTests extends RestActionTestCase {
     * test deprecation is logged if type is used as parameter
     */
    public void testTypeParameter() {
        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);
        Map<String, String> params = new HashMap<>();
        params.put("type", "some_type");
        for (Method method : Arrays.asList(Method.GET, Method.HEAD)) {
            // Ensure we have a fresh context for each request so we don't get duplicate headers
            try (ThreadContext.StoredContext ignore = verifyingClient.threadPool().getThreadContext().stashContext()) {
                RestRequest request = new FakeRestRequest.Builder(xContentRegistry())
                    .withMethod(method)
                    .withPath("/some_index/_source/id")

@@ -89,6 +100,7 @@ public class RestGetSourceActionTests extends RestActionTestCase {
                    .build();
                dispatchRequest(request);
                assertWarnings(RestGetSourceAction.TYPES_DEPRECATION_MESSAGE);
            }
        }
    }
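The `stashContext()` call in the new try-with-resources blocks swaps in a fresh `ThreadContext` and restores the previous one when the block closes, so the deprecation warning recorded for one loop iteration is not still present when the next iteration calls `assertWarnings`. A minimal fragment-style sketch of that behaviour, assuming the `Settings` and `ThreadContext` imports already used in these hunks:

```java
ThreadContext threadContext = new ThreadContext(Settings.EMPTY);
try (ThreadContext.StoredContext ignored = threadContext.stashContext()) {
    // Response headers (which is how warnings are carried) added here are scoped to this block.
    threadContext.addResponseHeader("Warning", "only visible inside the stashed context");
}
// On close, the previous header-free context is restored.
```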
@@ -19,8 +19,8 @@

package org.elasticsearch.rest.action.document;

import org.apache.lucene.util.SetOnce;
import org.elasticsearch.Version;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.DocWriteRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.cluster.ClusterName;

@@ -36,13 +36,11 @@ import org.elasticsearch.test.VersionUtils;
import org.elasticsearch.test.rest.FakeRestRequest;
import org.elasticsearch.test.rest.RestActionTestCase;
import org.junit.Before;
import org.mockito.ArgumentCaptor;

import java.util.concurrent.atomic.AtomicReference;

import static org.hamcrest.Matchers.equalTo;
import static org.mockito.Matchers.any;
import static org.mockito.Mockito.verify;
import static org.hamcrest.Matchers.instanceOf;

public class RestIndexActionTests extends RestActionTestCase {

@@ -106,6 +104,13 @@ public class RestIndexActionTests extends RestActionTestCase {
    }

    private void checkAutoIdOpType(Version minClusterVersion, DocWriteRequest.OpType expectedOpType) {
        SetOnce<Boolean> executeCalled = new SetOnce<>();
        verifyingClient.setExecuteVerifier((actionType, request) -> {
            assertThat(request, instanceOf(IndexRequest.class));
            assertThat(((IndexRequest) request).opType(), equalTo(expectedOpType));
            executeCalled.set(true);
            return null;
        });
        RestRequest autoIdRequest = new FakeRestRequest.Builder(xContentRegistry())
            .withMethod(RestRequest.Method.POST)
            .withPath("/some_index/_doc")

@@ -116,9 +121,6 @@ public class RestIndexActionTests extends RestActionTestCase {
            .add(new DiscoveryNode("test", buildNewFakeTransportAddress(), minClusterVersion))
            .build()).build());
        dispatchRequest(autoIdRequest);
        ArgumentCaptor<IndexRequest> argumentCaptor = ArgumentCaptor.forClass(IndexRequest.class);
        verify(nodeClient).index(argumentCaptor.capture(), any(ActionListener.class));
        IndexRequest indexRequest = argumentCaptor.getValue();
        assertEquals(expectedOpType, indexRequest.opType());
        assertThat(executeCalled.get(), equalTo(true));
    }
}
@@ -38,6 +38,9 @@ public class RestMultiGetActionTests extends RestActionTestCase {
    }

    public void testTypeInPath() {
        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        RestRequest deprecatedRequest = new FakeRestRequest.Builder(xContentRegistry())
            .withMethod(Method.GET)
            .withPath("some_index/some_type/_mget")

@@ -67,6 +70,9 @@ public class RestMultiGetActionTests extends RestActionTestCase {
            .endArray()
            .endObject();

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        RestRequest request = new FakeRestRequest.Builder(xContentRegistry())
            .withPath("_mget")
            .withContent(BytesReference.bytes(content), XContentType.JSON)
@@ -25,8 +25,8 @@ import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.RestRequest.Method;
import org.elasticsearch.test.rest.RestActionTestCase;
import org.elasticsearch.test.rest.FakeRestRequest;
import org.elasticsearch.test.rest.RestActionTestCase;
import org.junit.Before;

import java.io.IOException;

@@ -46,6 +46,9 @@ public class RestMultiTermVectorsActionTests extends RestActionTestCase {
            .withPath("/some_index/some_type/_mtermvectors")
            .build();

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        dispatchRequest(request);
        assertWarnings(RestMultiTermVectorsAction.TYPES_DEPRECATION_MESSAGE);
    }

@@ -60,6 +63,9 @@ public class RestMultiTermVectorsActionTests extends RestActionTestCase {
            .withParams(params)
            .build();

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        dispatchRequest(request);
        assertWarnings(RestMultiTermVectorsAction.TYPES_DEPRECATION_MESSAGE);
    }

@@ -80,6 +86,9 @@ public class RestMultiTermVectorsActionTests extends RestActionTestCase {
            .withContent(BytesReference.bytes(content), XContentType.JSON)
            .build();

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        dispatchRequest(request);
        assertWarnings(RestTermVectorsAction.TYPES_DEPRECATION_MESSAGE);
    }
@@ -25,8 +25,8 @@ import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.RestRequest.Method;
import org.elasticsearch.test.rest.RestActionTestCase;
import org.elasticsearch.test.rest.FakeRestRequest;
import org.elasticsearch.test.rest.RestActionTestCase;
import org.junit.Before;

import java.io.IOException;

@@ -44,6 +44,9 @@ public class RestTermVectorsActionTests extends RestActionTestCase {
            .withPath("/some_index/some_type/some_id/_termvectors")
            .build();

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        dispatchRequest(request);
        assertWarnings(RestTermVectorsAction.TYPES_DEPRECATION_MESSAGE);
    }

@@ -60,6 +63,9 @@ public class RestTermVectorsActionTests extends RestActionTestCase {
            .withContent(BytesReference.bytes(content), XContentType.JSON)
            .build();

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        dispatchRequest(request);
        assertWarnings(RestTermVectorsAction.TYPES_DEPRECATION_MESSAGE);
    }

@@ -47,6 +47,9 @@ public class RestUpdateActionTests extends RestActionTestCase {
    }

    public void testTypeInPath() {
        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        RestRequest deprecatedRequest = new FakeRestRequest.Builder(xContentRegistry())
            .withMethod(Method.POST)
            .withPath("/some_index/some_type/some_id/_update")

@@ -21,8 +21,8 @@ package org.elasticsearch.rest.action.search;

import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.RestRequest.Method;
import org.elasticsearch.test.rest.RestActionTestCase;
import org.elasticsearch.test.rest.FakeRestRequest;
import org.elasticsearch.test.rest.RestActionTestCase;
import org.junit.Before;

import java.util.HashMap;

@@ -41,6 +41,9 @@ public class RestCountActionTests extends RestActionTestCase {
            .withPath("/some_index/some_type/_count")
            .build();

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        dispatchRequest(request);
        assertWarnings(RestCountAction.TYPES_DEPRECATION_MESSAGE);
    }

@@ -55,6 +58,9 @@ public class RestCountActionTests extends RestActionTestCase {
            .withParams(params)
            .build();

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        dispatchRequest(request);
        assertWarnings(RestCountAction.TYPES_DEPRECATION_MESSAGE);
    }

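The same three added lines show up in every test hunk above and below: a comment plus a no-op verifier registered on `verifyingClient` before the request is dispatched, since the test harness now fails any test that dispatches a request without configuring a verifier. A rough, hypothetical sketch of a complete test built around this pattern follows; the class and method names are illustrative, the handler-registration setup is assumed rather than shown in this diff, and only the individual calls (`setExecuteVerifier`, `dispatchRequest`, `assertWarnings`, the `FakeRestRequest` builder, `RestCountAction.TYPES_DEPRECATION_MESSAGE`) are taken from the hunks themselves.

```java
package org.elasticsearch.rest.action.search;

import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.test.rest.FakeRestRequest;
import org.elasticsearch.test.rest.RestActionTestCase;

public class RestCountActionSketchTests extends RestActionTestCase {

    // Existing setup (not shown in this diff) is assumed to register RestCountAction
    // with the test controller before each test runs.

    public void testTypeInPathEmitsDeprecationWarning() {
        // Not exercising the client here; the verifier only needs to be set so the
        // harness does not fail the test for leaving it unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        RestRequest request = new FakeRestRequest.Builder(xContentRegistry())
            .withMethod(RestRequest.Method.GET)
            .withPath("/some_index/some_type/_count")
            .build();

        dispatchRequest(request);
        assertWarnings(RestCountAction.TYPES_DEPRECATION_MESSAGE);
    }
}
```
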
@@ -32,6 +32,9 @@ public class RestExplainActionTests extends RestActionTestCase {
    }

    public void testTypeInPath() {
        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteVerifier((arg1, arg2) -> null);

        RestRequest deprecatedRequest = new FakeRestRequest.Builder(xContentRegistry())
            .withMethod(RestRequest.Method.GET)
            .withPath("/some_index/some_type/some_id/_explain")

@@ -23,8 +23,8 @@ import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.test.rest.RestActionTestCase;
import org.elasticsearch.test.rest.FakeRestRequest;
import org.elasticsearch.test.rest.RestActionTestCase;
import org.junit.Before;

import java.nio.charset.StandardCharsets;

@@ -46,6 +46,9 @@ public class RestMultiSearchActionTests extends RestActionTestCase {
            .withContent(bytesContent, XContentType.JSON)
            .build();

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteLocallyVerifier((arg1, arg2) -> null);

        dispatchRequest(request);
        assertWarnings(RestMultiSearchAction.TYPES_DEPRECATION_MESSAGE);
    }

@@ -60,6 +63,9 @@ public class RestMultiSearchActionTests extends RestActionTestCase {
            .withContent(bytesContent, XContentType.JSON)
            .build();

        // We're not actually testing anything to do with the client, but need to set this so it doesn't fail the test for being unset.
        verifyingClient.setExecuteLocallyVerifier((arg1, arg2) -> null);

        dispatchRequest(request);
        assertWarnings(RestMultiSearchAction.TYPES_DEPRECATION_MESSAGE);
    }

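One detail worth noting in the multi-search hunks just above: they register the no-op verifier via `setExecuteLocallyVerifier` rather than `setExecuteVerifier`, presumably because this handler goes through the node client's local execution path in the test harness. A hypothetical sketch of that variant follows; the class name, test body, and msearch payload are illustrative assumptions, while the calls themselves (`setExecuteLocallyVerifier`, the `FakeRestRequest` builder, `assertWarnings(RestMultiSearchAction.TYPES_DEPRECATION_MESSAGE)`) come from the hunks above.

```java
package org.elasticsearch.rest.action.search;

import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.test.rest.FakeRestRequest;
import org.elasticsearch.test.rest.RestActionTestCase;

public class RestMultiSearchActionSketchTests extends RestActionTestCase {

    // Existing setup (not shown in this diff) is assumed to register the multi-search
    // handler with the test controller before each test runs.

    public void testTypeInPathEmitsDeprecationWarning() {
        // Minimal msearch payload: one header line and one search body line, newline terminated.
        String body = "{\"index\": \"some_index\"}\n{\"query\": {\"match_all\": {}}}\n";

        RestRequest request = new FakeRestRequest.Builder(xContentRegistry())
            .withMethod(RestRequest.Method.POST)
            .withPath("/some_index/some_type/_msearch")
            .withContent(new BytesArray(body), XContentType.JSON)
            .build();

        // The locally-executed variant of the verifier is the one these tests set.
        verifyingClient.setExecuteLocallyVerifier((arg1, arg2) -> null);

        dispatchRequest(request);
        assertWarnings(RestMultiSearchAction.TYPES_DEPRECATION_MESSAGE);
    }
}
```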