Move reindex from a plugin to a module
Commit 18808b7576 (parent 30107b4a74)
@ -15,11 +15,6 @@ The delete by query plugin adds support for deleting all of the documents
replacement for the problematic _delete-by-query_ functionality which has been
removed from Elasticsearch core.

<<plugins-reindex,Reindex>>::

The Reindex plugin adds support for updating documents matching a query and
copying documents from one index to another.

[float]
=== Community contributed API extension plugins

@ -1,183 +1,5 @@
[[plugins-reindex]]
=== Reindex Plugin

The reindex plugin adds two APIs:

* `_update_by_query` updates all documents matching a query in place.
* `_reindex` copies documents from one index to another.

These APIs are siblings so they live in the same plugin. Both use
{ref}/search-request-scroll.html[Scroll] and {ref}/docs-bulk.html[Bulk] APIs
to send an index request per document. There are potential shortcuts that could
speed this process so this plugin may change how this is done in the future.

[float]
==== Installation

This plugin can be installed using the plugin manager:

[source,sh]
----------------------------------------------------------------
sudo bin/plugin install reindex
----------------------------------------------------------------

The plugin must be installed on every node in the cluster, and each node must
be restarted after installation.

[float]
==== Removal

The plugin can be removed with the following command:

[source,sh]
----------------------------------------------------------------
sudo bin/plugin remove reindex
----------------------------------------------------------------

The node must be stopped before removing the plugin.

[[update-by-query-usage]]
==== Using `_update_by_query`

The simplest usage of `_update_by_query` just performs an update on every
document in the index without changing the source. This is useful to
<<picking-up-a-new-property,pick up a new property>> or some other online
mapping change. Here is the API:

[source,js]
--------------------------------------------------
POST /twitter/_update_by_query?conflicts=proceed
--------------------------------------------------
// AUTOSENSE

That will return something like this:

[source,js]
--------------------------------------------------
{
  "took" : 639,
  "updated": 1235,
  "batches": 13,
  "version_conflicts": 2,
  "failures" : [ ]
}
--------------------------------------------------

`_update_by_query` gets a snapshot of the index when it starts and indexes what
it finds using `internal` versioning. That means that you'll get a version
conflict if the document changes between the time when the snapshot was taken
and when the index request is processed. When the versions match the document
is updated and the version number is incremented.

All update and query failures cause the `_update_by_query` to abort and are
returned in the `failures` of the response. The updates that have been
performed still stick. In other words, the process is not rolled back, only
aborted. While the first failure causes the abort, all failures that are
returned by the failing bulk request are returned in the `failures` element so
it's possible for there to be quite a few.

If you want to simply count version conflicts rather than cause the
`_update_by_query` to abort you can set `conflicts=proceed` on the URL or
`"conflicts": "proceed"` in the request body. The first example does this
because it is just trying to pick up an online mapping change and a version
conflict simply means that the conflicting document was updated between the
start of the `_update_by_query` and the time when it attempted to update the
document. This is fine because that update will have picked up the online
mapping update.

Back to the API format, you can limit `_update_by_query` to a single type. This
will only update `tweet`s from the `twitter` index:

[source,js]
--------------------------------------------------
POST /twitter/tweet/_update_by_query?conflicts=proceed
--------------------------------------------------
// AUTOSENSE

You can also limit `_update_by_query` using the
{ref}/query-dsl.html[Query DSL]. This will update all documents from the
`twitter` index for the user `kimchy`:

[source,js]
--------------------------------------------------
POST /twitter/_update_by_query?conflicts=proceed
{
  "query": { <1>
    "term": {
      "user": "kimchy"
    }
  }
}
--------------------------------------------------
// AUTOSENSE

<1> The query must be passed as a value to the `query` key, in the same
way as the {ref}/search-search.html[Search API]. You can also use the `q`
parameter in the same way as the Search API.

So far we've only been updating documents without changing their source. That
is genuinely useful for things like
<<picking-up-a-new-property,picking up new properties>> but it's only half the
fun. `_update_by_query` supports a `script` object to update the document. This
will increment the `likes` field on all of kimchy's tweets:

[source,js]
--------------------------------------------------
POST /twitter/_update_by_query
{
  "script": {
    "inline": "ctx._source.likes++"
  },
  "query": {
    "term": {
      "user": "kimchy"
    }
  }
}
--------------------------------------------------
// AUTOSENSE

Just as in the {ref}/docs-update.html[Update API] you can set `ctx.op = "noop"` if
your script decides that it doesn't have to make any changes. That will cause
`_update_by_query` to omit that document from its updates. Setting `ctx.op` to
anything else is an error. If you want to delete by a query you can use the
<<plugins-delete-by-query,Delete by Query Plugin>> instead. Setting any other
field in `ctx` is an error.

Note that we stopped specifying `conflicts=proceed`. In this case we want a
version conflict to abort the process so we can handle the failure.

This API doesn't allow you to move the documents it touches, just modify their
source. This is intentional! We've made no provisions for removing the document
from its original location.

It's also possible to do this whole thing on multiple indexes and multiple
types at once, just like the search API:

[source,js]
--------------------------------------------------
POST /twitter,blog/tweet,post/_update_by_query
--------------------------------------------------
// AUTOSENSE

If you provide `routing` then the routing is copied to the scroll query,
limiting the process to the shards that match that routing value:

[source,js]
--------------------------------------------------
POST /twitter/_update_by_query?routing=1
--------------------------------------------------
// AUTOSENSE

By default `_update_by_query` uses scroll batches of 100. You can change the
batch size with the `scroll_size` URL parameter:

[source,js]
--------------------------------------------------
POST /twitter/_update_by_query?scroll_size=1000
--------------------------------------------------
// AUTOSENSE

[[reindex-usage]]
==== Using `_reindex`
[[docs-reindex]]
==== Reindex API

`_reindex`'s most basic form just copies documents from one index to another.
This will copy documents from `twitter` into `new_twitter`:
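
A sketch of that request, using the same index names; the `source` and `dest`
sections name the index to read from and the index to write to:

[source,js]
--------------------------------------------------
POST /_reindex
{
  "source": {
    "index": "twitter"
  },
  "dest": {
    "index": "new_twitter"
  }
}
--------------------------------------------------
// AUTOSENSE
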
@ -459,8 +281,8 @@ POST /_reindex
[float]
=== URL Parameters

In addition to the standard parameters like `pretty`, all APIs in this plugin
support `refresh`, `wait_for_completion`, `consistency`, and `timeout`.
In addition to the standard parameters like `pretty`, the Reindex API also
supports `refresh`, `wait_for_completion`, `consistency`, and `timeout`.

Sending the `refresh` URL parameter will cause all indexes to which the request
wrote to be refreshed. This is different from the Index API's `refresh`
@ -468,10 +290,10 @@ parameter which causes just the shard that received the new data to be indexed.

If the request contains `wait_for_completion=false` then Elasticsearch will
perform some preflight checks, launch the request, and then return a `task`
which can be used with {ref}/tasks.html[Tasks APIs] to cancel or get the status
of the task. For now, once the request is finished the task is gone and the
only place to look for the ultimate result of the task is in the Elasticsearch
log file. This will be fixed soon.
which can be used with <<docs-reindex-task-api,Tasks APIs>> to cancel or get
the status of the task. For now, once the request is finished the task is gone
and the only place to look for the ultimate result of the task is in the
Elasticsearch log file. This will be fixed soon.
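
As a sketch, a reindex can be launched in the background by adding
`wait_for_completion=false`; the index names here are only the illustrative
ones used above, and the returned task can then be followed with the Task APIs
described below:

[source,js]
--------------------------------------------------
POST /_reindex?wait_for_completion=false
{
  "source": {
    "index": "twitter"
  },
  "dest": {
    "index": "new_twitter"
  }
}
--------------------------------------------------
// AUTOSENSE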

`consistency` controls how many copies of a shard must respond to each write
request. `timeout` controls how long each write request waits for unavailable

@ -491,10 +313,10 @@ The JSON response looks like this:
{
  "took" : 639,
  "updated": 0,
  "created": 123,
  "batches": 1,
  "version_conflicts": 2,
  "failures" : [ ]
  "created": 123,
}
--------------------------------------------------

@ -506,14 +328,17 @@ The number of milliseconds from start to end of the whole operation.

The number of documents that were successfully updated.

`created`::

The number of documents that were successfully created.

`batches`::

The number of scroll responses pulled back by the `_reindex` or
`_update_by_query`.
The number of scroll responses pulled back by the reindex.

`version_conflicts`::

The number of version conflicts that the `_reindex` or `_update_by_query` hit.
The number of version conflicts that reindex hit.

`failures`::

@ -521,28 +346,16 @@ Array of all indexing failures. If this is non-empty then the request aborted
because of those failures. See `conflicts` for how to prevent version conflicts
from aborting the operation.

`created`::

The number of documents that were successfully created. This is not returned by
`_update_by_query` because it isn't allowed to create documents.

[float]
=== Response body
[[docs-reindex-task-api]]
=== Works with the Task API

While `_reindex` and `_update_by_query` are running you can fetch their status
using the {ref}/task/list.html[Task List APIs]. This will fetch `_reindex`:
While Reindex is running you can fetch its status using the
{ref}/task/list.html[Task List APIs]:

[source,js]
--------------------------------------------------
POST /_tasks/*/*reindex?pretty&detailed=true
--------------------------------------------------
// AUTOSENSE

and this will fetch `_update_by_query`:

[source,js]
--------------------------------------------------
POST /_tasks/*/*byquery?pretty&detailed=true
POST /_tasks/?pretty&detailed=true&actions=*reindex
--------------------------------------------------
// AUTOSENSE

@ -565,7 +378,7 @@ The responses looks like:
        "node" : "r1A2WoRbTwKZ516z6NEs5A",
        "id" : 36619,
        "type" : "transport",
        "action" : "indices:data/write/update/byquery",
        "action" : "indices:data/write/reindex",
        "status" : { <1>
          "total" : 6154,
          "updated" : 3500,
@ -575,7 +388,7 @@ The responses looks like:
          "version_conflicts" : 0,
          "noops" : 0
        },
        "description" : "update-by-query [test][test]"
        "description" : ""
      } ]
    }
  }

@ -592,104 +405,6 @@ will finish when their sum is equal to the `total` field.
[float]
=== Examples

Below are some examples of how you might use this plugin:

[[picking-up-a-new-property]]
==== Pick up a new property

Say you created an index without dynamic mapping, filled it with data, and then
added a mapping value to pick up more fields from the data:

[source,js]
--------------------------------------------------
PUT test
{
  "mappings": {
    "test": {
      "dynamic": false, <1>
      "properties": {
        "text": {"type": "string"}
      }
    }
  }
}

POST test/test?refresh
{
  "text": "words words",
  "flag": "bar"
}
POST test/test?refresh
{
  "text": "words words",
  "flag": "foo"
}
PUT test/_mapping/test <2>
{
  "properties": {
    "text": {"type": "string"},
    "flag": {"type": "string", "analyzer": "keyword"}
  }
}
--------------------------------------------------
// AUTOSENSE

<1> This means that new fields won't be indexed, just stored in `_source`.

<2> This updates the mapping to add the new `flag` field. To pick up the new
field you have to reindex all documents with it.

Searching for the data won't find anything:

[source,js]
--------------------------------------------------
POST test/_search?filter_path=hits.total
{
  "query": {
    "match": {
      "flag": "foo"
    }
  }
}
--------------------------------------------------
// AUTOSENSE

[source,js]
--------------------------------------------------
{
  "hits" : {
    "total" : 0
  }
}
--------------------------------------------------

But you can issue an `_update_by_query` request to pick up the new mapping:

[source,js]
--------------------------------------------------
POST test/_update_by_query?refresh&conflicts=proceed
POST test/_search?filter_path=hits.total
{
  "query": {
    "match": {
      "flag": "foo"
    }
  }
}
--------------------------------------------------
// AUTOSENSE

[source,js]
--------------------------------------------------
{
  "hits" : {
    "total" : 1
  }
}
--------------------------------------------------

Hurray! You can do the exact same thing when adding a field to a multifield.

==== Change the name of a field

`_reindex` can be used to build a copy of an index with renamed fields. Say you

@ -0,0 +1,358 @@
[[docs-update-by-query]]
==== Update By Query API

The simplest usage of `_update_by_query` just performs an update on every
document in the index without changing the source. This is useful to
<<picking-up-a-new-property,pick up a new property>> or some other online
mapping change. Here is the API:

[source,js]
--------------------------------------------------
POST /twitter/_update_by_query?conflicts=proceed
--------------------------------------------------
// AUTOSENSE

That will return something like this:

[source,js]
--------------------------------------------------
{
  "took" : 639,
  "updated": 1235,
  "batches": 13,
  "version_conflicts": 2,
  "failures" : [ ]
}
--------------------------------------------------

`_update_by_query` gets a snapshot of the index when it starts and indexes what
it finds using `internal` versioning. That means that you'll get a version
conflict if the document changes between the time when the snapshot was taken
and when the index request is processed. When the versions match the document
is updated and the version number is incremented.

All update and query failures cause the `_update_by_query` to abort and are
returned in the `failures` of the response. The updates that have been
performed still stick. In other words, the process is not rolled back, only
aborted. While the first failure causes the abort, all failures that are
returned by the failing bulk request are returned in the `failures` element so
it's possible for there to be quite a few.

If you want to simply count version conflicts rather than cause the
`_update_by_query` to abort you can set `conflicts=proceed` on the URL or
`"conflicts": "proceed"` in the request body. The first example does this
because it is just trying to pick up an online mapping change and a version
conflict simply means that the conflicting document was updated between the
start of the `_update_by_query` and the time when it attempted to update the
document. This is fine because that update will have picked up the online
mapping update.
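
For reference, a sketch of the request-body form of that setting, equivalent
to passing `conflicts=proceed` on the URL:

[source,js]
--------------------------------------------------
POST /twitter/_update_by_query
{
  "conflicts": "proceed"
}
--------------------------------------------------
// AUTOSENSE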

Back to the API format, you can limit `_update_by_query` to a single type. This
will only update `tweet`s from the `twitter` index:

[source,js]
--------------------------------------------------
POST /twitter/tweet/_update_by_query?conflicts=proceed
--------------------------------------------------
// AUTOSENSE

You can also limit `_update_by_query` using the
{ref}/query-dsl.html[Query DSL]. This will update all documents from the
`twitter` index for the user `kimchy`:

[source,js]
--------------------------------------------------
POST /twitter/_update_by_query?conflicts=proceed
{
  "query": { <1>
    "term": {
      "user": "kimchy"
    }
  }
}
--------------------------------------------------
// AUTOSENSE

<1> The query must be passed as a value to the `query` key, in the same
way as the {ref}/search-search.html[Search API]. You can also use the `q`
parameter in the same way as the Search API.

So far we've only been updating documents without changing their source. That
is genuinely useful for things like
<<picking-up-a-new-property,picking up new properties>> but it's only half the
fun. `_update_by_query` supports a `script` object to update the document. This
will increment the `likes` field on all of kimchy's tweets:

[source,js]
--------------------------------------------------
POST /twitter/_update_by_query
{
  "script": {
    "inline": "ctx._source.likes++"
  },
  "query": {
    "term": {
      "user": "kimchy"
    }
  }
}
--------------------------------------------------
// AUTOSENSE

Just as in the {ref}/docs-update.html[Update API] you can set `ctx.op = "noop"` if
your script decides that it doesn't have to make any changes. That will cause
`_update_by_query` to omit that document from its updates. Setting `ctx.op` to
anything else is an error. If you want to delete by a query you can use the
<<plugins-delete-by-query,Delete by Query Plugin>> instead. Setting any other
field in `ctx` is an error.
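
For example, a sketch of a script that leaves some documents untouched by
setting `ctx.op`; the `likes` threshold here is purely illustrative:

[source,js]
--------------------------------------------------
POST /twitter/_update_by_query
{
  "script": {
    "inline": "if (ctx._source.likes > 100) { ctx.op = \"noop\" } else { ctx._source.likes++ }"
  },
  "query": {
    "term": {
      "user": "kimchy"
    }
  }
}
--------------------------------------------------
// AUTOSENSE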

Note that we stopped specifying `conflicts=proceed`. In this case we want a
version conflict to abort the process so we can handle the failure.

This API doesn't allow you to move the documents it touches, just modify their
source. This is intentional! We've made no provisions for removing the document
from its original location.

It's also possible to do this whole thing on multiple indexes and multiple
types at once, just like the search API:

[source,js]
--------------------------------------------------
POST /twitter,blog/tweet,post/_update_by_query
--------------------------------------------------
// AUTOSENSE

If you provide `routing` then the routing is copied to the scroll query,
limiting the process to the shards that match that routing value:

[source,js]
--------------------------------------------------
POST /twitter/_update_by_query?routing=1
--------------------------------------------------
// AUTOSENSE

By default `_update_by_query` uses scroll batches of 100. You can change the
batch size with the `scroll_size` URL parameter:

[source,js]
--------------------------------------------------
POST /twitter/_update_by_query?scroll_size=1000
--------------------------------------------------
// AUTOSENSE

[float]
=== URL Parameters

In addition to the standard parameters like `pretty`, the Update By Query API
also supports `refresh`, `wait_for_completion`, `consistency`, and `timeout`.

Sending the `refresh` URL parameter will refresh all shards in the index being
updated when the request completes. This is different from the Index API's
`refresh` parameter which causes just the shard that received the new data to
be indexed.

If the request contains `wait_for_completion=false` then Elasticsearch will
perform some preflight checks, launch the request, and then return a `task`
which can be used with <<docs-update-by-query-task-api,Tasks APIs>> to cancel
or get the status of the task. For now, once the request is finished the task
is gone and the only place to look for the ultimate result of the task is in
the Elasticsearch log file. This will be fixed soon.

`consistency` controls how many copies of a shard must respond to each write
request. `timeout` controls how long each write request waits for unavailable
shards to become available. Both work exactly how they work in the
{ref}/docs-bulk.html[Bulk API].

`timeout` controls how long each batch waits for the target shard to become
available. It works exactly how it works in the {ref}/docs-bulk.html[Bulk API].
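
Putting several of these together, a sketch of a request that refreshes the
affected shards when it finishes, runs in the background, and loosens the write
settings; the particular values are only illustrative:

[source,js]
--------------------------------------------------
POST /twitter/_update_by_query?refresh&wait_for_completion=false&consistency=one&timeout=2m&conflicts=proceed
--------------------------------------------------
// AUTOSENSE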

[float]
=== Response body

The JSON response looks like this:

[source,js]
--------------------------------------------------
{
  "took" : 639,
  "updated": 0,
  "batches": 1,
  "version_conflicts": 2,
  "failures" : [ ]
}
--------------------------------------------------

`took`::

The number of milliseconds from start to end of the whole operation.

`updated`::

The number of documents that were successfully updated.

`batches`::

The number of scroll responses pulled back by the update by query.

`version_conflicts`::

The number of version conflicts that the update by query hit.

`failures`::

Array of all indexing failures. If this is non-empty then the request aborted
because of those failures. See `conflicts` for how to prevent version conflicts
from aborting the operation.


[float]
[[docs-update-by-query-task-api]]
=== Works with the Task API

While Update By Query is running you can fetch its status using the
{ref}/task/list.html[Task List APIs]:

[source,js]
--------------------------------------------------
POST /_tasks/?pretty&detailed=true&action=byquery
--------------------------------------------------
// AUTOSENSE

The response looks like:

[source,js]
--------------------------------------------------
{
  "nodes" : {
    "r1A2WoRbTwKZ516z6NEs5A" : {
      "name" : "Tyrannus",
      "transport_address" : "127.0.0.1:9300",
      "host" : "127.0.0.1",
      "ip" : "127.0.0.1:9300",
      "attributes" : {
        "testattr" : "test",
        "portsfile" : "true"
      },
      "tasks" : [ {
        "node" : "r1A2WoRbTwKZ516z6NEs5A",
        "id" : 36619,
        "type" : "transport",
        "action" : "indices:data/write/update/byquery",
        "status" : { <1>
          "total" : 6154,
          "updated" : 3500,
          "created" : 0,
          "deleted" : 0,
          "batches" : 36,
          "version_conflicts" : 0,
          "noops" : 0
        },
        "description" : ""
      } ]
    }
  }
}
--------------------------------------------------

<1> This object contains the actual status. It is just like the response JSON
with the important addition of the `total` field. `total` is the total number
of operations that the update by query expects to perform. You can estimate the
progress by adding the `updated`, `created`, and `deleted` fields. The request
will finish when their sum is equal to the `total` field.


[float]
=== Examples

[[picking-up-a-new-property]]
==== Pick up a new property

Say you created an index without dynamic mapping, filled it with data, and then
added a mapping value to pick up more fields from the data:

[source,js]
--------------------------------------------------
PUT test
{
  "mappings": {
    "test": {
      "dynamic": false, <1>
      "properties": {
        "text": {"type": "string"}
      }
    }
  }
}

POST test/test?refresh
{
  "text": "words words",
  "flag": "bar"
}
POST test/test?refresh
{
  "text": "words words",
  "flag": "foo"
}
PUT test/_mapping/test <2>
{
  "properties": {
    "text": {"type": "string"},
    "flag": {"type": "string", "analyzer": "keyword"}
  }
}
--------------------------------------------------
// AUTOSENSE

<1> This means that new fields won't be indexed, just stored in `_source`.

<2> This updates the mapping to add the new `flag` field. To pick up the new
field you have to reindex all documents with it.

Searching for the data won't find anything:

[source,js]
--------------------------------------------------
POST test/_search?filter_path=hits.total
{
  "query": {
    "match": {
      "flag": "foo"
    }
  }
}
--------------------------------------------------
// AUTOSENSE

[source,js]
--------------------------------------------------
{
  "hits" : {
    "total" : 0
  }
}
--------------------------------------------------

But you can issue an `_update_by_query` request to pick up the new mapping:

[source,js]
--------------------------------------------------
POST test/_update_by_query?refresh&conflicts=proceed
POST test/_search?filter_path=hits.total
{
  "query": {
    "match": {
      "flag": "foo"
    }
  }
}
--------------------------------------------------
// AUTOSENSE

[source,js]
--------------------------------------------------
{
  "hits" : {
    "total" : 1
  }
}
--------------------------------------------------

Hurray! You can do the exact same thing when adding a field to a multifield.

@ -19,5 +19,5 @@

esplugin {
  description 'The Reindex Plugin adds APIs to reindex from one index to another or update documents in place.'
  classname 'org.elasticsearch.plugin.reindex.ReindexPlugin'
  classname 'org.elasticsearch.index.reindex.ReindexPlugin'
}

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.admin.indices.refresh.RefreshRequest;

@ -62,7 +62,7 @@ import static java.util.Collections.emptyList;
import static java.util.Collections.unmodifiableList;
import static org.elasticsearch.action.bulk.BackoffPolicy.exponentialBackoff;
import static org.elasticsearch.common.unit.TimeValue.timeValueNanos;
import static org.elasticsearch.plugin.reindex.AbstractBulkByScrollRequest.SIZE_ALL_MATCHES;
import static org.elasticsearch.index.reindex.AbstractBulkByScrollRequest.SIZE_ALL_MATCHES;
import static org.elasticsearch.rest.RestStatus.CONFLICT;
import static org.elasticsearch.search.sort.SortBuilders.fieldSort;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.bulk.BulkRequest;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionRequestValidationException;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionRequestValidationException;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.Action;
import org.elasticsearch.action.ActionRequestBuilder;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.common.Nullable;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.Action;
import org.elasticsearch.action.ActionResponse;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.io.stream.StreamInput;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.action.bulk.BulkItemResponse.Failure;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.bulk.BulkItemResponse.Failure;
import org.elasticsearch.rest.RestChannel;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.Action;
import org.elasticsearch.client.ElasticsearchClient;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.ActionModule;
import org.elasticsearch.common.network.NetworkModule;

@ -33,7 +33,7 @@ public class ReindexPlugin extends Plugin {

    @Override
    public String description() {
        return "The Reindex Plugin adds APIs to reindex from one index to another or update documents in place.";
        return "The Reindex module adds APIs to reindex from one index to another or update documents in place.";
    }

    public void onModule(ActionModule actionModule) {

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.index.IndexRequest;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.Action;
import org.elasticsearch.action.index.IndexAction;

@ -17,13 +17,13 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.bulk.BulkItemResponse.Failure;
import org.elasticsearch.action.search.ShardSearchFailure;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.plugin.reindex.BulkByScrollTask.Status;
import org.elasticsearch.index.reindex.BulkByScrollTask.Status;

import java.io.IOException;
import java.util.List;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.WriteConsistencyLevel;
import org.elasticsearch.action.index.IndexRequest;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.client.Client;

@ -40,8 +40,8 @@ import org.elasticsearch.script.Script;

import java.util.Map;

import static org.elasticsearch.plugin.reindex.AbstractBulkByScrollRequest.SIZE_ALL_MATCHES;
import static org.elasticsearch.plugin.reindex.RestReindexAction.parseCommon;
import static org.elasticsearch.index.reindex.AbstractBulkByScrollRequest.SIZE_ALL_MATCHES;
import static org.elasticsearch.index.reindex.RestReindexAction.parseCommon;
import static org.elasticsearch.rest.RestRequest.Method.POST;

public class RestUpdateByQueryAction extends

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.ActionRequestValidationException;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.bulk.BulkItemResponse.Failure;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.Action;
import org.elasticsearch.client.ElasticsearchClient;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.search.SearchRequest;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.Action;
import org.elasticsearch.action.search.SearchAction;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.common.text.Text;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.test.ESTestCase;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.common.text.Text;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.hamcrest.Description;
import org.hamcrest.Matcher;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.Action;
import org.elasticsearch.action.ActionListener;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.test.ESTestCase;
import org.junit.Before;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.action.ListenableActionFuture;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.admin.indices.create.CreateIndexRequestBuilder;
import org.elasticsearch.action.index.IndexRequestBuilder;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.plugins.Plugin;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.bulk.BulkItemResponse.Failure;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.search.SearchRequest;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.admin.indices.create.CreateIndexRequestBuilder;
import org.elasticsearch.index.query.QueryBuilder;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import com.carrotsearch.randomizedtesting.annotations.Name;
import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.Version;
import org.elasticsearch.action.ActionRequestValidationException;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.common.lucene.uid.Versions;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.test.ESIntegTestCase;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.get.GetResponse;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.WriteConsistencyLevel;
import org.elasticsearch.action.bulk.BulkItemResponse.Failure;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.script.ExecutableScript;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.admin.indices.create.CreateIndexRequestBuilder;
import org.elasticsearch.search.sort.SortOrder;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.plugins.Plugin;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.search.SearchRequest;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.test.ESIntegTestCase;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexRequestBuilder;

@ -17,7 +17,7 @@
 * under the License.
 */

package org.elasticsearch.plugin.reindex;
package org.elasticsearch.index.reindex;

import java.util.Date;
import java.util.Map;

@ -22,6 +22,5 @@ apply plugin: 'elasticsearch.rest-test'
integTest {
  cluster {
    systemProperty 'es.script.inline', 'true'
    plugin 'reindex', project(':plugins:reindex')
  }
}

@ -15,6 +15,7 @@ List projects = [
  'modules:lang-expression',
  'modules:lang-groovy',
  'modules:lang-mustache',
  'modules:reindex',
  'plugins:analysis-icu',
  'plugins:analysis-kuromoji',
  'plugins:analysis-phonetic',

@ -32,7 +33,6 @@ List projects = [
  'plugins:mapper-attachments',
  'plugins:mapper-murmur3',
  'plugins:mapper-size',
  'plugins:reindex',
  'plugins:repository-azure',
  'plugins:repository-hdfs',
  'plugins:repository-s3',