Renamed all AUTOSENSE snippets to CONSOLE (#18210)

parent 2528934411
commit 3f594089c2
@@ -3,13 +3,13 @@ Elasticsearch documentation build process.
 See: https://github.com/elastic/docs

-Snippets marked with `// AUTOSENSE` are automatically annotated with "VIEW IN
+Snippets marked with `// CONSOLE` are automatically annotated with "VIEW IN
 SENSE" in the documentation and are automatically tested by the command
-`gradle :docs:check`. By default `// AUTOSENSE` snippet runs as its own isolated
+`gradle :docs:check`. By default `// CONSOLE` snippet runs as its own isolated
 test. You can manipulate the test execution in the following ways:

 * `// TEST`: Explicitly marks a snippet as a test. Snippets marked this way
-are tests even if they don't have `// AUTOSENSE`.
+are tests even if they don't have `// CONSOLE`.
 * `// TEST[s/foo/bar/]`: Replace `foo` with `bar` in the test. This should be
 used sparingly because it makes the test "lie". Sometimes, though, you can use
 it to make the tests more clear.
@@ -22,7 +22,7 @@ are tests even if they don't have `// AUTOSENSE`.
 tell the story of some use case because it merges the snippets (and thus the
 use case) into one big test.
 * `// TEST[skip:reason]`: Skip this test. Replace `reason` with the actual
-reason to skip the test. Snippets without `// TEST` or `// AUTOSENSE` aren't
+reason to skip the test. Snippets without `// TEST` or `// CONSOLE` aren't
 considered tests anyway but this is useful for explicitly documenting the
 reason why the test shouldn't be run.
 * `// TEST[setup:name]`: Run some setup code before running the snippet. This
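The `// TEST[s/foo/bar/]` directive described above behaves like a sed-style substitution applied to the snippet body before the harness runs it. A minimal Python sketch of that semantics (the parsing and `apply_substitution` helper here are illustrative only; the real docs build implements this in the Gradle/Groovy `SnippetsTask`, not in Python):

```python
import re

def parse_test_directive(comment):
    """Extract pattern and replacement from a `// TEST[s/pattern/replacement/]` comment."""
    m = re.fullmatch(r"// TEST\[s/(.*?)/(.*?)/\]", comment.strip())
    if m is None:
        raise ValueError("not a substitution directive: " + comment)
    return m.group(1), m.group(2)

def apply_substitution(snippet, comment):
    """Rewrite the snippet the way the test harness would before executing it."""
    pattern, replacement = parse_test_directive(comment)
    # re.sub also expands escapes like \n in the replacement, which is how
    # directives such as `// TEST[s/^/PUT twitter\n/]` prepend setup requests.
    return re.sub(pattern, replacement, snippet)

print(apply_substitution("GET twitter/_search?q=user:foo", "// TEST[s/foo/bar/]"))
# GET twitter/_search?q=user:bar
```

The same mechanism covers the `s/^/PUT twitter\n.../` prepend trick used later in this commit, since `^` anchors the replacement at the start of the snippet.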
@@ -31,7 +31,7 @@ task listSnippets(type: SnippetsTask) {

 task listAutoSenseCandidates(type: SnippetsTask) {
   group 'Docs'
-  description 'List snippets that probably should be marked // AUTOSENSE'
+  description 'List snippets that probably should be marked // CONSOLE'
   perSnippet {
     if (
         it.autoSense // Already marked, nothing to do
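The Gradle task above walks every snippet and skips the ones already marked. The same candidate check can be sketched in Python on a simplified line-based model of an asciidoc file (assumption: the real `SnippetsTask` parses source blocks properly rather than scanning lines like this):

```python
def find_console_candidates(lines):
    """Return line numbers of [source,js] blocks with no `// CONSOLE` marker
    before the next source block begins."""
    candidates = []
    current = None  # line number of the open [source,js] block, if any
    for number, line in enumerate(lines, start=1):
        stripped = line.strip()
        if stripped.startswith("[source,js]"):
            if current is not None:
                candidates.append(current)  # previous block never got marked
            current = number
        elif stripped == "// CONSOLE" and current is not None:
            current = None  # this block is already marked, nothing to do
    if current is not None:
        candidates.append(current)
    return candidates

doc = """\
[source,js]
----
GET twitter/_search
----
// CONSOLE

[source,js]
----
POST twitter/_flush
----
""".splitlines()
print(find_console_candidates(doc))  # [7]
```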
@@ -81,7 +81,7 @@ PUT icu_sample
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 <1> Uses the default `nfkc_cf` normalization.
 <2> Uses the customized `nfd_normalizer` token filter, which is set to use `nfc` normalization with decomposition.

@@ -113,7 +113,7 @@ PUT icu_sample
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 ===== Rules customization

@@ -163,7 +163,7 @@ PUT icu_sample

 POST icu_sample/_analyze?analyzer=my_analyzer&text=Elasticsearch. Wow!
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 The above `analyze` request returns the following:

@@ -230,7 +230,7 @@ PUT icu_sample
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 <1> Uses the default `nfkc_cf` normalization.
 <2> Uses the customized `nfc_normalizer` token filter, which is set to use `nfc` normalization.

@@ -264,7 +264,7 @@ PUT icu_sample
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 The ICU folding token filter already does Unicode normalization, so there is
 no need to use Normalize character or token filter as well.

@@ -305,7 +305,7 @@ PUT icu_sample
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 [[analysis-icu-collation]]
 ==== ICU Collation Token Filter

@@ -370,7 +370,7 @@ GET _search <3>
 }

 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 <1> The `name` field uses the `standard` analyzer, and so support full text queries.
 <2> The `name.sort` field uses the `keyword` analyzer to preserve the name as

@@ -494,7 +494,7 @@ GET icu_sample/_analyze?analyzer=latin
 }

 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 <1> This transforms transliterates characters to Latin, and separates accents
 from their base characters, removes the accents, and then puts the
@@ -174,7 +174,7 @@ PUT kuromoji_sample

 POST kuromoji_sample/_analyze?analyzer=my_analyzer&text=東京スカイツリー
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 The above `analyze` request returns the following:

@@ -226,7 +226,7 @@ PUT kuromoji_sample

 POST kuromoji_sample/_analyze?analyzer=my_analyzer&text=飲み
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 [source,text]
 --------------------------------------------------

@@ -285,7 +285,7 @@ PUT kuromoji_sample
 POST kuromoji_sample/_analyze?analyzer=my_analyzer&text=寿司がおいしいね

 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 [source,text]
 --------------------------------------------------

@@ -359,7 +359,7 @@ POST kuromoji_sample/_analyze?analyzer=katakana_analyzer&text=寿司 <1>
 POST kuromoji_sample/_analyze?analyzer=romaji_analyzer&text=寿司 <2>

 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 <1> Returns `スシ`.
 <2> Returns `sushi`.

@@ -410,7 +410,7 @@ POST kuromoji_sample/_analyze?analyzer=my_analyzer&text=コピー <1>
 POST kuromoji_sample/_analyze?analyzer=my_analyzer&text=サーバー <2>

 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 <1> Returns `コピー`.
 <2> Return `サーバ`.

@@ -456,7 +456,7 @@ PUT kuromoji_sample

 POST kuromoji_sample/_analyze?analyzer=my_analyzer&text=ストップは消える
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 The above request returns:

@@ -503,7 +503,7 @@ PUT kuromoji_sample
 POST kuromoji_sample/_analyze?analyzer=my_analyzer&text=一〇〇〇

 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 [source,text]
 --------------------------------------------------
@@ -81,7 +81,7 @@ PUT phonetic_sample

 POST phonetic_sample/_analyze?analyzer=my_analyzer&text=Joe Bloggs <1>
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 <1> Returns: `J`, `joe`, `BLKS`, `bloggs`
@@ -58,7 +58,7 @@ a parameter:
 --------------------------------------------------
 DELETE /twitter/tweet/_query?q=user:kimchy
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 or using the {ref}/query-dsl.html[Query DSL] defined within the request body:

@@ -73,7 +73,7 @@ DELETE /twitter/tweet/_query
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 <1> The query must be passed as a value to the `query` key, in the same way as
 the {ref}/search-search.html[search api].
@@ -82,7 +82,7 @@ GET test/_search
 }
 }
 ----
-// AUTOSENSE
+// CONSOLE

 [[lang-javascript-stored]]
 [float]

@@ -131,7 +131,7 @@ GET test/_search
 }

 ----
-// AUTOSENSE
+// CONSOLE

 <1> We store the script under the id `my_script`.
 <2> The function score query retrieves the script with id `my_script`.

@@ -187,7 +187,7 @@ GET test/_search
 }

 ----
-// AUTOSENSE
+// CONSOLE

 <1> The function score query retrieves the script with filename `my_script.javascript`.
@@ -81,7 +81,7 @@ GET test/_search
 }
 }
 ----
-// AUTOSENSE
+// CONSOLE

 [[lang-python-stored]]
 [float]

@@ -130,7 +130,7 @@ GET test/_search
 }

 ----
-// AUTOSENSE
+// CONSOLE

 <1> We store the script under the id `my_script`.
 <2> The function score query retrieves the script with id `my_script`.

@@ -186,7 +186,7 @@ GET test/_search
 }

 ----
-// AUTOSENSE
+// CONSOLE

 <1> The function score query retrieves the script with filename `my_script.py`.
@@ -51,7 +51,7 @@ POST /trying-out-mapper-attachments
 "cv": { "type": "attachment" }
 }}}}
 --------------------------
-// AUTOSENSE
+// CONSOLE

 Index a new document populated with a `base64`-encoded attachment:

@@ -62,7 +62,7 @@ POST /trying-out-mapper-attachments/person/1
 "cv": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0="
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE

 Search for the document using words in the attachment:

@@ -75,7 +75,7 @@ POST /trying-out-mapper-attachments/person/_search
 "query": "ipsum"
 }}}
 --------------------------
-// AUTOSENSE
+// CONSOLE

 If you get a hit for your indexed document, the plugin should be installed and working.

@@ -96,7 +96,7 @@ PUT /test/person/_mapping
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE

 In this case, the JSON to index can be:

@@ -107,7 +107,7 @@ PUT /test/person/1
 "my_attachment" : "... base64 encoded attachment ..."
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE

 Or it is possible to use more elaborated JSON if content type, resource name or language need to be set explicitly:

@@ -123,7 +123,7 @@ PUT /test/person/1
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE

 The `attachment` type not only indexes the content of the doc in `content` sub field, but also automatically adds meta
 data on the attachment as well (when available).

@@ -167,7 +167,7 @@ PUT /test/person/_mapping
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE

 In the above example, the actual content indexed is mapped under `fields` name `content`, and we decide not to index it, so
 it will only be available in the `_all` field. The other fields map to their respective metadata names, but there is no

@@ -201,7 +201,7 @@ PUT /test/person/_mapping
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE

 In this example, the extracted content will be copy as well to `copy` field.

@@ -244,7 +244,7 @@ GET /test/person/_search
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE

 Will give you:

@@ -296,7 +296,7 @@ PUT /test/person/1
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE

 [[mapper-attachments-error-handling]]
 ==== Metadata parsing error handling

@@ -372,7 +372,7 @@ GET /test/person/_search
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE

 It gives back:
@@ -58,7 +58,7 @@ PUT my_index
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE

 Such a mapping would allow to refer to `my_field.hash` in order to get hashes
 of the values of the `my_field` field. This is only useful in order to run

@@ -88,7 +88,7 @@ GET my_index/_search
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE

 <1> Counting unique values on the `my_field.hash` field
@@ -50,7 +50,7 @@ PUT my_index
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE

 The value of the `_size` field is accessible in queries, aggregations, scripts,
 and when sorting:

@@ -99,7 +99,7 @@ GET my_index/_search
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE

 <1> Querying on the `_size` field
 <2> Aggregating on the `_size` field
@@ -167,7 +167,7 @@ PUT _snapshot/my_backup4
 }
 }
 ----
-// AUTOSENSE
+// CONSOLE

 Example using Java:

@@ -148,7 +148,7 @@ PUT _snapshot/my_s3_repository
 }
 }
 ----
-// AUTOSENSE
+// CONSOLE

 The following settings are supported:
@@ -77,4 +77,4 @@ PUT my_index
 }
 }
 ----
-// AUTOSENSE
+// CONSOLE
@@ -52,7 +52,7 @@ GET _cluster/health?wait_for_status=yellow
 GET test/_analyze?analyzer=whitespace&text=foo,bar baz
 # "foo,bar", "baz"
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 [float]
 ===== Non-word character tokenizer

@@ -81,7 +81,7 @@ GET test/_analyze?analyzer=nonword&text=foo,bar baz
 GET test/_analyze?analyzer=nonword&text=type_1-type_4
 # "type_1","type_4"
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 [float]

@@ -108,7 +108,7 @@ GET _cluster/health?wait_for_status=yellow
 GET test/_analyze?analyzer=camel&text=MooseX::FTPClass2_beta
 # "moose","x","ftp","class","2","beta"
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 The regex above is easier to understand as:
@@ -15,7 +15,7 @@ GET _tasks <1>
 GET _tasks?nodes=nodeId1,nodeId2 <2>
 GET _tasks?nodes=nodeId1,nodeId2&actions=cluster:* <3>
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 <1> Retrieves all tasks currently running on all nodes in the cluster.
 <2> Retrieves all tasks running on nodes `nodeId1` and `nodeId2`. See <<cluster-nodes>> for more info about how to select individual nodes.

@@ -66,7 +66,7 @@ tasks using the following two commands:
 GET _tasks/taskId:1
 GET _tasks?parent_task_id=parentTaskId:1
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 The task API can be also used to wait for completion of a particular task. The following call will
 block for 10 seconds or until the task with id `oTUltX4IQMOUUVeiohTt8A:12345` is completed.

@@ -75,7 +75,7 @@ block for 10 seconds or until the task with id `oTUltX4IQMOUUVeiohTt8A:12345` is
 --------------------------------------------------
 GET _tasks/oTUltX4IQMOUUVeiohTt8A:12345?wait_for_completion=true&timeout=10s
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 Tasks can be also listed using _cat version of the list tasks command, which accepts the same arguments
 as the standard list tasks command.

@@ -84,7 +84,7 @@ as the standard list tasks command.
 --------------------------------------------------
 GET _cat/tasks
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 [float]
 === Task Cancellation

@@ -95,7 +95,7 @@ If a long-running task supports cancellation, it can be cancelled by the followi
 --------------------------------------------------
 POST _tasks/taskId:1/_cancel
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 The task cancellation command supports the same task selection parameters as the list tasks command, so multiple tasks
 can be cancelled at the same time. For example, the following command will cancel all reindex tasks running on the

@@ -105,7 +105,7 @@ nodes `nodeId1` and `nodeId2`.
 --------------------------------------------------
 POST _tasks/_cancel?node_id=nodeId1,nodeId2&actions=*reindex
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 [float]

@@ -118,4 +118,4 @@ The following command will change the grouping to parent tasks:
 --------------------------------------------------
 GET _tasks?group_by=parents
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
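The `wait_for_completion=true&timeout=10s` request in the tasks hunks above blocks until the task finishes or the timeout elapses. From a client's point of view that is equivalent to a poll loop; a minimal Python sketch with a stubbed status fetcher (`fetch_task` is a placeholder standing in for a `GET _tasks/<task_id>` call, not a real client API):

```python
import time

def wait_for_completion(fetch_task, task_id, timeout=10.0, interval=0.1):
    """Poll a task until it reports completion or the timeout elapses.

    `fetch_task` must return a dict with a boolean "completed" field,
    mirroring the shape of a task-status response.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_task(task_id)
        if status.get("completed"):
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"task {task_id} still running after {timeout}s")
        time.sleep(interval)

# Stub that reports completion on the third poll.
calls = {"n": 0}
def fake_fetch(task_id):
    calls["n"] += 1
    return {"completed": calls["n"] >= 3}

print(wait_for_completion(fake_fetch, "oTUltX4IQMOUUVeiohTt8A:12345")["completed"])  # True
```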
@@ -98,7 +98,7 @@ PUT twitter/tweet/1?version=2
 "message" : "elasticsearch now has versioning support, double cool!"
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[catch: conflict]

 *NOTE:* versioning is completely real time, and is not affected by the

@@ -173,7 +173,7 @@ PUT twitter/tweet/1?op_type=create
 "message" : "trying out Elasticsearch"
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 Another option to specify `create` is to use the following uri:

@@ -186,7 +186,7 @@ PUT twitter/tweet/1/_create
 "message" : "trying out Elasticsearch"
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 [float]
 === Automatic ID Generation

@@ -205,7 +205,7 @@ POST twitter/tweet/
 "message" : "trying out Elasticsearch"
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 The result of the above index operation is:

@@ -244,7 +244,7 @@ POST twitter/tweet?routing=kimchy
 "message" : "trying out Elasticsearch"
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 In the example above, the "tweet" document is routed to a shard based on
 the `routing` parameter provided: "kimchy".

@@ -282,7 +282,7 @@ PUT blogs/blog_tag/1122?parent=1111
 "tag" : "something"
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 When indexing a child document, the routing value is automatically set
 to be the same as its parent, unless the routing value is explicitly

@@ -306,7 +306,7 @@ PUT twitter/tweet/1?timestamp=2009-11-15T14:12:12
 "message" : "trying out Elasticsearch"
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 If the `timestamp` value is not provided externally or in the `_source`,
 the `timestamp` will be automatically set to the date the document was

@@ -337,7 +337,7 @@ PUT twitter/tweet/1?ttl=86400000ms
 "message": "Trying out elasticsearch, so far so good?"
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 [source,js]
 --------------------------------------------------

@@ -347,7 +347,7 @@ PUT twitter/tweet/1?ttl=1d
 "message": "Trying out elasticsearch, so far so good?"
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 More information can be found on the
 <<mapping-ttl-field,_ttl mapping page>>.

@@ -430,4 +430,4 @@ PUT twitter/tweet/1?timeout=5m
 "message" : "trying out Elasticsearch"
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
@@ -18,7 +18,7 @@ POST _reindex
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:big_twitter]

 That will return something like this:

@@ -64,7 +64,7 @@ POST _reindex
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:twitter]

 Setting `version_type` to `external` will cause Elasticsearch to preserve the

@@ -85,7 +85,7 @@ POST _reindex
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:twitter]

 Settings `op_type` to `create` will cause `_reindex` to only create missing

@@ -105,7 +105,7 @@ POST _reindex
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:twitter]

 By default version conflicts abort the `_reindex` process but you can just

@@ -125,7 +125,7 @@ POST _reindex
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:twitter]

 You can limit the documents by adding a type to the `source` or by adding a

@@ -149,7 +149,7 @@ POST _reindex
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:twitter]

 `index` and `type` in `source` can both be lists, allowing you to copy from

@@ -173,7 +173,7 @@ POST _reindex
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[s/^/PUT twitter\nPUT blog\n/]

 It's also possible to limit the number of processed documents by setting

@@ -193,7 +193,7 @@ POST _reindex
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:twitter]

 If you want a particular set of documents from the twitter index you'll

@@ -215,7 +215,7 @@ POST _reindex
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:twitter]

 Like `_update_by_query`, `_reindex` supports a script that modifies the

@@ -238,7 +238,7 @@ POST _reindex
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:twitter]

 Think of the possibilities! Just be careful! With great power.... You can
@@ -298,7 +298,7 @@ POST _reindex
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[s/^/PUT source\n/]

 By default `_reindex` uses scroll batches of 100. You can change the

@@ -318,7 +318,7 @@ POST _reindex
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[s/^/PUT source\n/]

 Reindex can also use the <<ingest>> feature by specifying a

@@ -337,7 +337,7 @@ POST _reindex
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[s/^/PUT source\n/]

 [float]

@@ -437,7 +437,7 @@ While Reindex is running you can fetch their status using the
 --------------------------------------------------
 GET _tasks/?pretty&detailed=true&actions=*reindex
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 The responses looks like:
@@ -496,7 +496,7 @@ Any Reindex can be canceled using the <<tasks,Task Cancel API>>:
 --------------------------------------------------
 POST _tasks/taskid:1/_cancel
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 The `task_id` can be found using the tasks API above.

@@ -515,7 +515,7 @@ the `_rethrottle` API:
 --------------------------------------------------
 POST _reindex/taskid:1/_rethrottle?requests_per_second=unlimited
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 The `task_id` can be found using the tasks API above.

@@ -540,7 +540,7 @@ POST test/test/1?refresh&pretty
 "flag": "foo"
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 But you don't like the name `flag` and want to replace it with `tag`.
 `_reindex` can create the other index for you:

@@ -560,7 +560,7 @@ POST _reindex?pretty
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[continued]

 Now you can get the new document:

@@ -569,7 +569,7 @@ Now you can get the new document:
 --------------------------------------------------
 GET test2/test/1?pretty
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[continued]

 and it'll look like:
@@ -12,7 +12,7 @@ mapping change. Here is the API:
 --------------------------------------------------
 POST twitter/_update_by_query?conflicts=proceed
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:big_twitter]

 That will return something like this:

@@ -64,7 +64,7 @@ will only update `tweet`s from the `twitter` index:
 --------------------------------------------------
 POST twitter/tweet/_update_by_query?conflicts=proceed
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:twitter]

 You can also limit `_update_by_query` using the

@@ -82,7 +82,7 @@ POST twitter/_update_by_query?conflicts=proceed
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:twitter]

 <1> The query must be passed as a value to the `query` key, in the same

@@ -109,7 +109,7 @@ POST twitter/_update_by_query
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:twitter]

 Just as in <<docs-update,Update API>> you can set `ctx.op = "noop"` if

@@ -133,7 +133,7 @@ types at once, just like the search API:
 --------------------------------------------------
 POST twitter,blog/tweet,post/_update_by_query
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[s/^/PUT twitter\nPUT blog\n/]

 If you provide `routing` then the routing is copied to the scroll query,

@@ -143,7 +143,7 @@ limiting the process to the shards that match that routing value:
 --------------------------------------------------
 POST twitter/_update_by_query?routing=1
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:twitter]

 By default `_update_by_query` uses scroll batches of 100. You can change the

@@ -153,7 +153,7 @@ batch size with the `scroll_size` URL parameter:
 --------------------------------------------------
 POST twitter/_update_by_query?scroll_size=1000
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:twitter]

 `_update_by_query` can also use the <<ingest>> feature by

@@ -173,7 +173,7 @@ PUT _ingest/pipeline/set-foo
 }
 POST twitter/_update_by_query?pipeline=set-foo
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:twitter]

 [float]

@@ -268,7 +268,7 @@ While Update By Query is running you can fetch their status using the
 --------------------------------------------------
 GET _tasks/?pretty&detailed=true&action=*byquery
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 The responses looks like:
@@ -327,7 +327,7 @@ Any Update By Query can be canceled using the <<tasks,Task Cancel API>>:
 --------------------------------------------------
 POST _tasks/taskid:1/_cancel
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 The `task_id` can be found using the tasks API above.

@@ -346,7 +346,7 @@ using the `_rethrottle` API:
 --------------------------------------------------
 POST _update_by_query/taskid:1/_rethrottle?requests_per_second=unlimited
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 The `task_id` can be found using the tasks API above.

@@ -396,7 +396,7 @@ PUT test/_mapping/test <2>
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE

 <1> This means that new fields won't be indexed, just stored in `_source`.

@@ -416,7 +416,7 @@ POST test/_search?filter_path=hits.total
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[continued]

 [source,js]

@@ -443,7 +443,7 @@ POST test/_search?filter_path=hits.total
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[continued]

 [source,js]
@@ -45,7 +45,7 @@ PUT _all/_settings
 }
 }
 ------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[s/^/PUT test\n/]

 With delayed allocation enabled, the above scenario changes to look like this:

@@ -83,7 +83,7 @@ can be viewed with the <<cluster-health,cluster health API>>:
 ------------------------------
 GET _cluster/health <1>
 ------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> This request will return a `delayed_unassigned_shards` value.

 ==== Removing a node permanently

@@ -101,7 +101,7 @@ PUT _all/_settings
 }
 }
 ------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[s/^/PUT test\n/]

 You can reset the timeout as soon as the missing shards have started to recover.
@@ -31,7 +31,7 @@ PUT test/_settings
 "index.routing.allocation.include.size": "big,medium"
 }
 ------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[s/^/PUT test\n/]

 Alternatively, we can move the index `test` away from the `small` nodes with

@@ -44,7 +44,7 @@ PUT test/_settings
 "index.routing.allocation.exclude.size": "small"
 }
 ------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[s/^/PUT test\n/]

 Multiple rules can be specified, in which case all conditions must be

@@ -59,7 +59,7 @@ PUT test/_settings
 "index.routing.allocation.include.rack": "rack1"
 }
 ------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[s/^/PUT test\n/]

 NOTE: If some conditions cannot be satisfied then shards will not be moved.

@@ -100,5 +100,5 @@ PUT test/_settings
 "index.routing.allocation.include._ip": "192.168.2.*"
 }
 ------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[skip:indexes don't assign]
@ -33,7 +33,7 @@ PUT index_4
|
|||
}
|
||||
}
|
||||
------------------------------
|
||||
// AUTOSENSE
|
||||
// CONSOLE
|
||||
|
||||
In the above example:
|
||||
|
||||
|
@ -52,5 +52,5 @@ PUT index_4/_settings
|
|||
"index.priority": 1
|
||||
}
|
||||
------------------------------
|
||||
// AUTOSENSE
|
||||
// CONSOLE
|
||||
// TEST[continued]
|
||||
|
|
|
@ -121,7 +121,7 @@ GET _analyze
|
|||
"attributes" : ["keyword"] <1>
|
||||
}
|
||||
--------------------------------------------------
|
||||
// AUTOSENSE
|
||||
// CONSOLE
|
||||
<1> Set "keyword" to output "keyword" attribute only
|
||||
|
||||
coming[2.0.0, body based parameters were added in 2.0.0]
|
||||
|
|
|
@@ -12,7 +12,7 @@ trigger flush operations as required in order to clear memory.
 --------------------------------------------------
 POST twitter/_flush
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:twitter]
 
 [float]
@@ -45,7 +45,7 @@ POST kimchy,elasticsearch/_flush
 
 POST _flush
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[s/^/PUT kimchy\nPUT elasticsearch\n/]
 
 [[indices-synced-flush]]
@@ -76,7 +76,7 @@ the <<indices-stats,indices stats>> API:
 --------------------------------------------------
 GET twitter/_stats/commit?level=shards
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[s/^/PUT twitter\n/]
 
 
@@ -141,7 +141,7 @@ NOTE: It is harmless to request a synced flush while there is ongoing indexing.
 --------------------------------------------------
 POST twitter/_flush/synced
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[setup:twitter]
 
 The response contains details about how many shards were successfully sync-flushed and information about any failure.
@@ -238,4 +238,4 @@ POST kimchy,elasticsearch/_flush/synced
 
 POST _flush/synced
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
@@ -37,7 +37,7 @@ PUT twitter/_mapping/tweet <3>
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> <<indices-create-index,Creates an index>> called `twitter` with the `message` field in the `tweet` <<mapping-type,mapping type>>.
 <2> Uses the PUT mapping API to add a new mapping type called `user`.
 <3> Uses the PUT mapping API to add a new field called `user_name` to the `tweet` mapping type.
@@ -115,7 +115,7 @@ PUT my_index/_mapping/user
 }
 }
 -----------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> Create an index with a `first` field under the `name` <<object>> field, and a `user_id` field.
 <2> Add a `last` field under the `name` object field.
 <3> Update the `ignore_above` setting from its default of 0.
@@ -174,7 +174,7 @@ PUT my_index/_mapping/type_one <2>
 }
 }
 -----------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[catch:request]
 <1> Create an index with two types, both of which contain a `text` field which have the same mapping.
 <2> Trying to update the `search_analyzer` just for `type_one` throws an exception like `"Merge failed with failures..."`.
@@ -194,6 +194,6 @@ PUT my_index/_mapping/type_one?update_all_types <1>
 }
 }
 -----------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[continued]
 <1> Adding the `update_all_types` parameter updates the `text` field in `type_one` and `type_two`.

@@ -37,7 +37,7 @@ PUT _template/template_1
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 Defines a template named template_1, with a template pattern of `te*`.
 The settings and mappings will be applied to any index name that matches
@@ -30,7 +30,7 @@ PUT my-index/my-type/my-id?pipeline=my_pipeline_id
 "foo": "bar"
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[catch:request]
 
 See <<ingest-apis,Ingest APIs>> for more information about creating, adding, and deleting pipelines.
@@ -50,7 +50,7 @@ PUT _ingest/pipeline/my-pipeline-id
 ]
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 NOTE: The put pipeline API also instructs all ingest nodes to reload their in-memory representation of pipelines, so that
 pipeline changes take effect immediately.
@@ -64,7 +64,7 @@ The get pipeline API returns pipelines based on ID. This API always returns a lo
 --------------------------------------------------
 GET _ingest/pipeline/my-pipeline-id
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[continued]
 
 Example response:
@@ -104,7 +104,7 @@ The delete pipeline API deletes pipelines by ID.
 --------------------------------------------------
 DELETE _ingest/pipeline/my-pipeline-id
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[continued]
 
 [[simulate-pipeline-api]]
@@ -189,7 +189,7 @@ POST _ingest/pipeline/_simulate
 ]
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 Response:
 
@@ -288,7 +288,7 @@ POST _ingest/pipeline/_simulate?verbose
 ]
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 Response:
 
@@ -156,7 +156,7 @@ PUT my_index <1>
 }
 }
 ---------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> Create an index called `my_index`.
 <2> Add mapping types called `user` and `blogpost`.
 <3> Disable the `_all` <<mapping-fields,meta field>> for the `user` mapping type.

@@ -12,7 +12,7 @@ type, and fields will spring to life automatically:
 PUT data/counters/1 <1>
 { "count": 5 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> Creates the `data` index, the `counters` mapping type, and a field
 called `count` with datatype `long`.
 
@@ -51,7 +51,7 @@ PUT data/_settings <1>
 "index.mapper.dynamic":false
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[continued]
 
 <1> Disable automatic type creation for all indices.

@@ -27,7 +27,7 @@ PUT my_index
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `_default_` mapping defaults the <<mapping-all-field,`_all`>> field to disabled.
 <2> The `user` type inherits the settings from `_default_`.
 <3> The `blogpost` type overrides the defaults and enables the <<mapping-all-field,`_all`>> field.
@@ -74,7 +74,7 @@ PUT _template/logging
 PUT logs-2015.10.01/event/1
 { "message": "error:16" }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `logging` template will match any indices beginning with `logs-`.
 <2> Matching indices will be created with a single primary shard.
 <3> The `_all` field will be disabled by default for new type mappings.
@@ -68,7 +68,7 @@ PUT my_index/my_type/1
 
 GET my_index/_mapping <1>
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `create_date` field has been added as a <<date,`date`>>
 field with the <<mapping-date-format,`format`>>: +
 `"yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z"`.
@@ -93,7 +93,7 @@ PUT my_index/my_type/1 <1>
 "create": "2015/09/02"
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> The `create_date` field has been added as a <<text,`text`>> field.
 
@@ -118,7 +118,7 @@ PUT my_index/my_type/1
 "create_date": "09/25/2015"
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 
 [[numeric-detection]]
@@ -147,7 +147,7 @@ PUT my_index/my_type/1
 "my_integer": "1" <2>
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `my_float` field is added as a <<number,`double`>> field.
 <2> The `my_integer` field is added as a <<number,`long`>> field.
 

@@ -96,7 +96,7 @@ PUT my_index/my_type/1
 }
 
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `my_integer` field is mapped as an `integer`.
 <2> The `my_string` field is mapped as a `text`, with a `keyword` <<multi-fields,multi field>>.
 
@@ -140,7 +140,7 @@ PUT my_index/my_type/1
 "long_text": "foo" <2>
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `long_num` field is mapped as a `long`.
 <2> The `long_text` field uses the default `string` mapping.
 
@@ -198,7 +198,7 @@ PUT my_index/my_type/1
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 [[template-variables]]
 ==== `{name}` and `{dynamic_type}`
@@ -245,7 +245,7 @@ PUT my_index/my_type/1
 "count": 5 <2>
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `english` field is mapped as a `string` field with the `english` analyzer.
 <2> The `count` field is mapped as a `long` field with `doc_values` disabled
 
@@ -417,7 +417,7 @@ PUT _template/disable_all_field
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> Applies the mappings to an `index` which matches the pattern `*`, in other
 words, all new indices.
 <2> Defines the `_default_` type mapping types within the index.
@@ -28,7 +28,7 @@ GET my_index/_search
 }
 }
 --------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `_all` field will contain the terms: [ `"john"`, `"smith"`, `"1970"`, `"10"`, `"24"` ]
 
 [NOTE]
@@ -77,7 +77,7 @@ GET _search
 }
 }
 --------------------------------
-// AUTOSENSE
+// CONSOLE
 
 The same goes for the `?q=` parameter in <<search-uri-request, URI search
 requests>> (which is rewritten to a `query_string` query internally):
@@ -115,7 +115,7 @@ PUT my_index
 }
 }
 --------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[s/\.\.\.//]
 
 <1> The `_all` field in `type_1` is enabled.
@@ -147,7 +147,7 @@ PUT my_index
 }
 }
 --------------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> The `_all` field is disabled for the `my_type` type.
 <2> The `query_string` query will default to querying the `content` field in this index.
@@ -184,7 +184,7 @@ PUT myindex
 }
 }
 --------------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> When querying the `_all` field, words that originated in the
 `title` field are twice as relevant as words that originated in
@@ -241,7 +241,7 @@ GET myindex/_search
 }
 }
 --------------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> The `first_name` and `last_name` values are copied to the `full_name` field.
 
@@ -296,7 +296,7 @@ GET _search
 }
 }
 --------------------------------
-// AUTOSENSE
+// CONSOLE
 
 Of course, storing the `_all` field will use significantly more disk space
 and, because it is a combination of other fields, it may result in odd
@@ -344,7 +344,7 @@ GET _search
 }
 }
 --------------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> The query inspects the `_all` field to find matching documents.
 <2> Highlighting is performed on the two name fields, which are available from the `_source`.
@@ -37,7 +37,7 @@ GET my_index/_search
 }
 
 --------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> Querying on the `_field_names` field (also see the <<query-dsl-exists-query,`exists`>> query)
 <2> Accessing the `_field_names` field in scripts (inline scripts must be <<enable-dynamic-scripting,enabled>> for this example to work)

@@ -38,7 +38,7 @@ GET my_index/_search
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> Querying on the `_id` field (also see the <<query-dsl-ids-query,`ids` query>>)
 <2> Accessing the `_id` field in scripts (inline scripts must be <<enable-dynamic-scripting,enabled>> for this example to work)

@@ -55,7 +55,7 @@ GET index_1,index_2/_search
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> Querying on the `_index` field
 <2> Aggregating on the `_index` field

@@ -22,7 +22,7 @@ PUT my_index
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> This `_meta` info can be retrieved with the
 <<indices-get-mapping,GET mapping>> API.
 

@@ -47,7 +47,7 @@ GET my_index/my_parent/_search
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `my_parent` type is parent to the `my_child` type.
 <2> Index a parent document.
 <3> Index two child documents, specifying the parent document's ID.
@@ -86,7 +86,7 @@ GET my_index/_search
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[continued]
 
 <1> Querying on the `_parent` field (also see the <<query-dsl-has-parent-query,`has_parent` query>> and the <<query-dsl-has-child-query,`has_child` query>>)
@@ -138,7 +138,7 @@ PUT my_index
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 The amount of heap used by global ordinals can be checked as follows:
 
@@ -150,4 +150,4 @@ GET _stats/fielddata?human&fields=_parent
 # Per-node per-index
 GET _nodes/stats/indices/fielddata?human&fields=_parent
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
@@ -21,7 +21,7 @@ PUT my_index/my_type/1?routing=user1 <1>
 
 GET my_index/my_type/1?routing=user1 <2>
 ------------------------------
-// AUTOSENSE
+// CONSOLE
 // TESTSETUP
 
 <1> This document uses `user1` as its routing value, instead of its ID.
@@ -47,7 +47,7 @@ GET my_index/_search
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> Querying on the `_routing` field (also see the <<query-dsl-ids-query,`ids` query>>)
 <2> Accessing the `_routing` field in scripts (inline scripts must be <<enable-dynamic-scripting,enabled>> for this example to work)
@@ -70,7 +70,7 @@ GET my_index/_search?routing=user1,user2 <1>
 }
 }
 ------------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> This search request will only be executed on the shards associated with the `user1` and `user2` routing values.
 
@@ -103,7 +103,7 @@ PUT my_index2/my_type/1 <2>
 "text": "No routing value provided"
 }
 ------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[catch:request]
 <1> Routing is required for `my_type` documents.
 <2> This index request throws a `routing_missing_exception`.

@@ -24,7 +24,7 @@ PUT tweets
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 [WARNING]
 .Think before disabling the `_source` field
@@ -130,7 +130,7 @@ GET logs/event/_search
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> These fields will be removed from the stored `_source` field.
 <2> We can still search on this field, even though it is not in the stored `_source`.
@@ -30,7 +30,7 @@ PUT my_index/my_type/3 <4>
 { "text": "Autogenerated timestamp set to now()" }
 
 ------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> Enable the `_timestamp` field with default settings.
 <2> Set the timestamp manually with a formatted date.
@@ -88,7 +88,7 @@ GET my_index/_search
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[continued]
 
 <1> Querying on the `_timestamp` field

@@ -44,7 +44,7 @@ PUT my_index/my_type/2 <2>
 "text": "Will not expire"
 }
 -------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> This document will expire 10 minutes after being indexed.
 <2> This document has no TTL set and will not expire.
 
@@ -80,7 +80,7 @@ PUT my_index/my_type/2 <2>
 "text": "Will expire in 5 minutes"
 }
 -------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> This document will expire 10 minutes after being indexed.
 <2> This document has no TTL set and so will expire after the default 5 minutes.
 

@@ -35,7 +35,7 @@ GET my_index/type_*/_search
 }
 
 --------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> Querying on the `_type` field
 <2> Accessing the `_type` field in scripts (inline scripts must be <<enable-dynamic-scripting,enabled>> for this example to work)

@@ -50,7 +50,7 @@ GET my_index/_search
 }
 }
 --------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> Querying on the `_uid` field (also see the <<query-dsl-ids-query,`ids` query>>)
 <2> Aggregating on the `_uid` field
@@ -72,7 +72,7 @@ GET my_index/_analyze?field=text.english <4>
 "text": "The quick Brown Foxes."
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `text` field uses the default `standard` analyzer`.
 <2> The `text.english` <<multi-fields,multi-field>> uses the `english` analyzer, which removes stop words and applies stemming.
 <3> This returns the tokens: [ `the`, `quick`, `brown`, `foxes` ].
@@ -136,7 +136,7 @@ PUT my_index
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 [source,js]
 --------------------------------------------------

@@ -23,7 +23,7 @@ PUT my_index
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> Matches on the `title` field will have twice the weight as those on the
 `content` field, which has the default `boost` of `1.0`.
@@ -45,7 +45,7 @@ POST _search
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 is equivalent to:
 
@@ -63,7 +63,7 @@ POST _search
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 
 The boost is also applied when it is copied with the
@@ -44,7 +44,7 @@ PUT my_index/my_type/2
 "number_two": "10" <2>
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[catch:request]
 <1> The `number_one` field will contain the integer `10`.
 <2> This document will be rejected because coercion is disabled.
@@ -87,7 +87,7 @@ PUT my_index/my_type/1
 PUT my_index/my_type/2
 { "number_two": "10" } <2>
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[catch:request]
 <1> The `number_one` field overrides the index level setting to enable coercion.
 <2> This document will be rejected because the `number_two` field inherits the index-level coercion setting.

@@ -49,7 +49,7 @@ GET my_index/_search
 }
 
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The values of the `first_name` and `last_name` fields are copied to the
 `full_name` field.
 

@@ -40,7 +40,7 @@ PUT my_index
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `status_code` field has `doc_values` enabled by default.
 <2> The `session_id` has `doc_values` disabled, but can still be queried.
 
@@ -31,7 +31,7 @@ PUT my_index/my_type/2 <3>
 
 GET my_index/_mapping <4>
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> This document introduces the string field `username`, the object field
 `name`, and two string fields under the `name` object which can be
 referred to as `name.first` and `name.last`.
@@ -77,7 +77,7 @@ PUT my_index
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> Dynamic mapping is disabled at the type level, so no new top-level fields will be added dynamically.
 <2> The `user` object inherits the type-level setting.
 <3> The `user.social_networks` object enables dynamic mapping, so new fields may be added to this inner object.

@@ -52,7 +52,7 @@ PUT my_index/session/session_2
 "last_updated": "2015-12-06T18:22:13"
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `session_data` field is disabled.
 <2> Any arbitrary data can be passed to the `session_data` field as it will be entirely ignored.
 <3> The `session_data` will also ignore values that are not JSON objects.
@@ -87,7 +87,7 @@ GET my_index/session/session_1 <2>
 
 GET my_index/_mapping <3>
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The entire `session` mapping type is disabled.
 <2> The document can be retrieved.
 <3> Checking the mapping reveals that no fields have been added.
@@ -112,4 +112,4 @@ PUT my_index
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
@@ -25,7 +25,7 @@ PUT my_index
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 Many APIs which support date values also support <<date-math,date math>>
 expressions, such as `now-1m/d` -- the current time, minus one month, rounded

@@ -56,5 +56,5 @@ GET my_index/_search?fielddata_fields=location.geohash
 }
 
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> A `geohash_precision` of 6 equates to geohash cells of approximately 1.26km x 0.6km

@@ -60,5 +60,5 @@ GET my_index/_search?fielddata_fields=location.geohash
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 

@@ -53,7 +53,7 @@ GET my_index/_search?fielddata_fields=location.geohash <2>
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> A `location.geohash` field will be indexed for each geo-point.
 <2> The geohash can be retrieved with <<doc-values,`doc_values`>>.
 <3> A <<query-dsl-prefix-query,`prefix`>> query can find all geohashes which start with a particular prefix.
@@ -40,7 +40,7 @@ GET _search <4>
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> This field will ignore any string longer than 20 characters.
 <2> This document is indexed successfully.
 <3> This document will be indexed, but without indexing the `message` field.

@@ -43,7 +43,7 @@ PUT my_index/my_type/2
 "number_two": "foo" <2>
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[catch:request]
 <1> This document will have the `text` field indexed, but not the `number_one` field.
 <2> This document will be rejected because `number_two` does not allow malformed values.
@@ -81,7 +81,7 @@ PUT my_index
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> The `number_one` field inherits the index-level setting.
 <2> The `number_two` field overrides the index-level setting to turn off `ignore_malformed`.

@@ -28,7 +28,7 @@ PUT my_index
 }
 }
 --------------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> The `title` and `content` fields will be included in the `_all` field.
 <2> The `date` field will not be included in the `_all` field.
@@ -69,7 +69,7 @@ PUT my_index
 }
 }
 --------------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> All fields in `my_type` are excluded from `_all`.
 <2> The `author.first_name` and `author.last_name` fields are included in `_all`.
@@ -66,5 +66,5 @@ GET my_index/_search
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `text` field will use the postings highlighter by default because `offsets` are indexed.

@@ -51,7 +51,7 @@ GET my_index/_search
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> Setting `lat_lon` to true indexes the geo-point in the `location.lat` and `location.lon` fields.
 <2> The `indexed` option tells the geo-distance query to use the inverted index instead of the in-memory calculation.
 

@@ -55,7 +55,7 @@ GET my_index/_search
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `city.raw` field is a `keyword` version of the `city` field.
 <2> The `city` field can be used for full text search.
 <3> The `city.raw` field can be used for sorting and aggregations
@@ -115,7 +115,7 @@ GET my_index/_search
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> The `text` field uses the `standard` analyzer.
 <2> The `text.english` field uses the `english` analyzer.

@@ -30,7 +30,7 @@ PUT my_index/_mapping/my_type
 }
 }
 ------------
-// AUTOSENSE
+// CONSOLE
 // TEST[s/^/PUT my_index\n/]
 
 NOTE: Norms will not be removed instantly, but will be removed as old segments

@@ -43,7 +43,7 @@ GET my_index/_search
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> Replace explicit `null` values with the term `NULL`.
 <2> An empty array does not contain an explicit `null`, and so won't be replaced with the `null_value`.
 <3> A query for `NULL` returns document 1, but not document 2.
@@ -41,7 +41,7 @@ GET my_index/groups/_search
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> This phrase query doesn't match our document which is totally expected.
 <2> This phrase query matches our document, even though `Abraham` and `Lincoln`
 are in separate strings, because `slop` > `position_increment_gap`.
@@ -79,7 +79,7 @@ GET my_index/groups/_search
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The first term in the next array element will be 0 terms apart from the
 last term in the previous array element.
 <2> The phrase query matches our document which is weird, but its what we asked

@@ -57,7 +57,7 @@ PUT my_index/my_type/1 <4>
 ]
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> Properties under the `my_type` mapping type.
 <2> Properties under the `manager` object field.
 <3> Properties under the `employees` nested field.
@@ -98,7 +98,7 @@ GET my_index/_search
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 // TEST[continued]
 
 IMPORTANT: The full path to the inner field must be specified.

@@ -68,7 +68,7 @@ GET my_index/_search
 }
 
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 
 <1> Analysis settings to define the custom `autocomplete` analyzer.
 <2> The `text` field uses the `autocomplete` analyzer at index time, but the `standard` analyzer at search time.
@@ -48,7 +48,7 @@ PUT my_index
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `default_field` uses the `classic` similarity (ie TF/IDF).
 <2> The `bm25_field` uses the `BM25` similarity.
 

@@ -51,7 +51,7 @@ GET my_index/_search
 "fields": [ "title", "date" ] <2>
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `title` and `date` fields are stored.
 <2> This request will retrieve the values of the `title` and `date` fields.
 

@@ -62,7 +62,7 @@ GET my_index/_search
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The fast vector highlighter will be used by default for the `text` field
 because term vectors are enabled.
 

@@ -75,7 +75,7 @@ GET my_index/_search
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> The `tags` field is dynamically added as a `string` field.
 <2> The `lists` field is dynamically added as an `object` field.
 <3> The second document contains no arrays, but can be indexed into the same fields.

@@ -44,7 +44,7 @@ GET my_index/_search
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> Indexing a document with a JSON `true`.
 <2> Querying for the document with `1`, which is interpreted as `true`.
 
@@ -81,7 +81,7 @@ GET my_index/_search
 }
 }
 --------------------------------------------------
-// AUTOSENSE
+// CONSOLE
 <1> Inline scripts must be <<enable-dynamic-scripting,enabled>> for this example to work.
 
 [[boolean-params]]
@ -50,7 +50,7 @@ GET my_index/_search
|
|||
"sort": { "date": "asc"} <5>
|
||||
}
|
||||
--------------------------------------------------
|
||||
// AUTOSENSE
|
||||
// CONSOLE
|
||||
<1> The `date` field uses the default `format`.
|
||||
<2> This document uses a plain date.
|
||||
<3> This document includes a time.
|
||||
|
@ -81,7 +81,7 @@ PUT my_index
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE

[[date-params]]
==== Parameters for `date` fields

@ -74,7 +74,7 @@ GET my_index/_search
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE
<1> Geo-point expressed as an object, with `lat` and `lon` keys.
<2> Geo-point expressed as a string with the format: `"lat,lon"`.
<3> Geo-point expressed as a geohash.

@ -33,7 +33,7 @@ GET my_index/_search
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE

[[ip-params]]

@ -28,7 +28,7 @@ PUT my_index
}
}
--------------------------------
// AUTOSENSE
// CONSOLE

[[keyword-params]]
==== Parameters for keyword fields

@ -29,7 +29,7 @@ PUT my_index/my_type/1
]
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE
<1> The `user` field is dynamically added as a field of type `object`.

would be transformed internally into a document that looks more like this:
@ -61,7 +61,7 @@ GET my_index/_search
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE
// TEST[continued]

==== Using `nested` fields for arrays of objects

@ -143,7 +143,7 @@ GET my_index/_search
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE
<1> The `user` field is mapped as type `nested` instead of type `object`.
<2> This query doesn't match because `Alice` and `Smith` are not in the same nested object.
<3> This query matches because `Alice` and `White` are in the same nested object.

@ -31,7 +31,7 @@ PUT my_index
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE

[[number-params]]
==== Parameters for numeric fields

@ -18,7 +18,7 @@ PUT my_index/my_type/1
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE
<1> The outer document is also a JSON object.
<2> It contains an inner object called `manager`.
<3> Which in turn contains an inner object called `name`.

@ -64,7 +64,7 @@ PUT my_index
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE
<1> The mapping type is a type of object, and has a `properties` field.
<2> The `manager` field is an inner `object` field.
<3> The `manager.name` field is an inner `object` field within the `manager` field.
@ -30,7 +30,7 @@ PUT my_index
}
}
--------------------------------
// AUTOSENSE
// CONSOLE

Sometimes it is useful to have both a full text (`text`) and a keyword
(`keyword`) version of the same field: one for full text search and the

@ -43,7 +43,7 @@ GET my_index/_search
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE
<1> The `name` field is an analyzed string field which uses the default `standard` analyzer.
<2> The `name.length` field is a `token_count` <<multi-fields,multi-field>> which will index the number of tokens in the `name` field.
<3> This query matches only the document containing `Rachel Alice Williams`, as it contains three tokens.

@ -71,7 +71,7 @@ PUT my_index
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE

Also the `precision_step` parameter is now irrelevant and will be rejected on
indices that are created on or after 5.0.

@ -21,7 +21,7 @@ PUT _cluster/settings
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE

NOTE: Shards will only be relocated if it is possible to do so without
breaking another routing constraint, such as never allocating a primary and
@ -66,5 +66,5 @@ PUT _cluster/settings
}
}
------------------------
// AUTOSENSE
// CONSOLE
// TEST[skip:indexes don't assign]

@ -65,7 +65,7 @@ PUT _cluster/settings
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE

NOTE: Prior to 2.0.0, when using multiple data paths, the disk threshold
decider only factored in the usage across all data paths (if you had two

@ -157,7 +157,7 @@ PUT _cluster/settings
}
}
----------------------------
// AUTOSENSE
// CONSOLE
// TEST[catch:/cannot set discovery.zen.minimum_master_nodes to more than the current master nodes/]

TIP: An advantage of splitting the master and data roles between dedicated

@ -79,7 +79,7 @@ GET my_index/_search
}
}
-------------------------------------
// AUTOSENSE
// CONSOLE

[float]

@ -113,7 +113,7 @@ GET my_index/_search
}
}
-------------------------------
// AUTOSENSE
// CONSOLE

Doc-values can only return "simple" field values like numbers, dates, geo-
points, terms, etc, or arrays of these values if the field is multi-valued.
@ -211,7 +211,7 @@ GET my_index/_search
}
}
-------------------------------
// AUTOSENSE
// CONSOLE
<1> The `title` field is not stored and so cannot be used with the `_fields[]` syntax.
<2> The `title` field can still be accessed from the `_source`.

@ -66,7 +66,7 @@ PUT hockey/player/_bulk?refresh
{"index":{"_id":11}}
{"first":"joe","last":"colborne","goals":[3,18,13],"assists":[6,20,24],"gp":[26,67,82]}
----------------------------------------------------------------
// AUTOSENSE
// CONSOLE
// TESTSETUP

[float]

@ -92,7 +92,7 @@ GET hockey/_search
}
}
----------------------------------------------------------------
// AUTOSENSE
// CONSOLE

Alternatively, you could do the same thing using a script field instead of a function score:

@ -113,7 +113,7 @@ GET hockey/_search
}
}
----------------------------------------------------------------
// AUTOSENSE
// CONSOLE

The following example uses a Painless script to sort the players by their combined first and last names. The names are accessed using
`input.doc['first'].value` and `input.doc['last'].value`.

@ -137,7 +137,7 @@ GET hockey/_search
}
}
----------------------------------------------------------------
// AUTOSENSE
// CONSOLE

[float]
=== Updating Fields with Painless

@ -161,7 +161,7 @@ GET hockey/_search
}
}
----------------------------------------------------------------
// AUTOSENSE
// CONSOLE

To change player 1's last name to `hockey`, simply set `input.ctx._source.last` to the new value:
@ -178,7 +178,7 @@ POST hockey/player/1/_update
}
}
----------------------------------------------------------------
// AUTOSENSE
// CONSOLE

You can also add fields to a document. For example, this script adds a new field that contains
the player's nickname, _hockey_.

@ -197,7 +197,7 @@ POST hockey/player/1/_update
}
}
----------------------------------------------------------------
// AUTOSENSE
// CONSOLE

[float]
=== Writing Type-Safe Scripts to Improve Performance

@ -229,7 +229,7 @@ GET hockey/_search
}
}
----------------------------------------------------------------
// AUTOSENSE
// CONSOLE

[[painless-api]]
[float]

@ -41,7 +41,7 @@ GET my_index/_search
}
}
-------------------------------------
// AUTOSENSE
// CONSOLE

[float]

@ -191,7 +191,7 @@ POST _scripts/groovy/calculate-score
"script": "log(_score * 2) + my_modifier"
}
-----------------------------------
// AUTOSENSE
// CONSOLE

This same script can be retrieved with:
@ -199,7 +199,7 @@ This same script can be retrieved with:
-----------------------------------
GET _scripts/groovy/calculate-score
-----------------------------------
// AUTOSENSE
// CONSOLE
// TEST[continued]

Stored scripts can be used by specifying the `lang` and `id` parameters as follows:

@ -221,7 +221,7 @@ GET _search
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE
// TEST[continued]

And deleted with:

@ -230,7 +230,7 @@ And deleted with:
-----------------------------------
DELETE _scripts/groovy/calculate-score
-----------------------------------
// AUTOSENSE
// CONSOLE
// TEST[continued]

NOTE: The size of stored scripts is limited to 65,535 bytes. This can be

@ -28,7 +28,7 @@ Once a repository is registered, its information can be obtained using the follo
-----------------------------------
GET /_snapshot/my_backup
-----------------------------------
// AUTOSENSE
// CONSOLE

which returns:

@ -180,7 +180,7 @@ PUT /_snapshot/s3_repository?verify=false
}
}
-----------------------------------
// AUTOSENSE
// CONSOLE

The verification process can also be executed manually by running the following command:
@ -188,7 +188,7 @@ The verification process can also be executed manually by running the following
-----------------------------------
POST /_snapshot/s3_repository/_verify
-----------------------------------
// AUTOSENSE
// CONSOLE

It returns a list of nodes where repository was successfully verified or an error message if verification process failed.

@ -203,7 +203,7 @@ command:
-----------------------------------
PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
-----------------------------------
// AUTOSENSE
// CONSOLE

The `wait_for_completion` parameter specifies whether or not the request should return immediately after snapshot
initialization (default) or wait for snapshot completion. During snapshot initialization, information about all

@ -222,7 +222,7 @@ PUT /_snapshot/my_backup/snapshot_1
"include_global_state": false
}
-----------------------------------
// AUTOSENSE
// CONSOLE

The list of indices that should be included into the snapshot can be specified using the `indices` parameter that
supports <<search-multi-index-type,multi index syntax>>. The snapshot request also supports the

@ -258,7 +258,7 @@ Once a snapshot is created information about this snapshot can be obtained using
-----------------------------------
GET /_snapshot/my_backup/snapshot_1
-----------------------------------
// AUTOSENSE
// CONSOLE

Similar as for repositories, information about multiple snapshots can be queried in one go, supporting wildcards as well:

@ -266,7 +266,7 @@ Similar as for repositories, information about multiple snapshots can be queried
-----------------------------------
GET /_snapshot/my_backup/snapshot_*,some_other_snapshot
-----------------------------------
// AUTOSENSE
// CONSOLE

All snapshots currently stored in the repository can be listed using the following command:
@ -274,7 +274,7 @@ All snapshots currently stored in the repository can be listed using the followi
-----------------------------------
GET /_snapshot/my_backup/_all
-----------------------------------
// AUTOSENSE
// CONSOLE

The command fails if some of the snapshots are unavailable. The boolean parameter `ignore_unavailable` can be used to
return all snapshots that are currently available.

@ -292,7 +292,7 @@ A snapshot can be deleted from the repository using the following command:
-----------------------------------
DELETE /_snapshot/my_backup/snapshot_1
-----------------------------------
// AUTOSENSE
// CONSOLE

When a snapshot is deleted from a repository, Elasticsearch deletes all files that are associated with the deleted
snapshot and not used by any other snapshots. If the delete snapshot operation is executed while the snapshot is being

@ -306,7 +306,7 @@ A repository can be deleted using the following command:
-----------------------------------
DELETE /_snapshot/my_backup
-----------------------------------
// AUTOSENSE
// CONSOLE

When a repository is deleted, Elasticsearch only removes the reference to the location where the repository is storing
the snapshots. The snapshots themselves are left untouched and in place.

@ -320,7 +320,7 @@ A snapshot can be restored using the following command:
-----------------------------------
POST /_snapshot/my_backup/snapshot_1/_restore
-----------------------------------
// AUTOSENSE
// CONSOLE

By default, all indices in the snapshot as well as cluster state are restored. It's possible to select indices that
should be restored as well as prevent global cluster state from being restored by using `indices` and

@ -341,7 +341,7 @@ POST /_snapshot/my_backup/snapshot_1/_restore
"rename_replacement": "restored_index_$1"
}
-----------------------------------
// AUTOSENSE
// CONSOLE

The restore operation can be performed on a functioning cluster. However, an existing index can be only restored if it's
<<indices-open-close,closed>> and has the same number of shards as the index in the snapshot.
@ -378,7 +378,7 @@ POST /_snapshot/my_backup/snapshot_1/_restore
]
}
-----------------------------------
// AUTOSENSE
// CONSOLE

Please note, that some settings such as `index.number_of_shards` cannot be changed during restore operation.

@ -413,7 +413,7 @@ A list of currently running snapshots with their detailed status information can
-----------------------------------
GET /_snapshot/_status
-----------------------------------
// AUTOSENSE
// CONSOLE

In this format, the command will return information about all currently running snapshots. By specifying a repository name, it's possible
to limit the results to a particular repository:

@ -422,7 +422,7 @@ to limit the results to a particular repository:
-----------------------------------
GET /_snapshot/my_backup/_status
-----------------------------------
// AUTOSENSE
// CONSOLE

If both repository name and snapshot id are specified, this command will return detailed status information for the given snapshot even
if it's not currently running:

@ -431,7 +431,7 @@ if it's not currently running:
-----------------------------------
GET /_snapshot/my_backup/snapshot_1/_status
-----------------------------------
// AUTOSENSE
// CONSOLE

Multiple ids are also supported:

@ -439,7 +439,7 @@ Multiple ids are also supported:
-----------------------------------
GET /_snapshot/my_backup/snapshot_1,snapshot_2/_status
-----------------------------------
// AUTOSENSE
// CONSOLE

[float]
=== Monitoring snapshot/restore progress
@ -454,7 +454,7 @@ The snapshot operation can be also monitored by periodic calls to the snapshot i
-----------------------------------
GET /_snapshot/my_backup/snapshot_1
-----------------------------------
// AUTOSENSE
// CONSOLE

Please note that snapshot info operation uses the same resources and thread pool as the snapshot operation. So,
executing a snapshot info operation while large shards are being snapshotted can cause the snapshot info operation to wait

@ -466,7 +466,7 @@ To get more immediate and complete information about snapshots the snapshot stat
-----------------------------------
GET /_snapshot/my_backup/snapshot_1/_status
-----------------------------------
// AUTOSENSE
// CONSOLE

While snapshot info method returns only basic information about the snapshot in progress, the snapshot status returns
complete breakdown of the current state for each shard participating in the snapshot.

@ -69,7 +69,7 @@ POST _search
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE

==== Scoring with `bool.filter`

@ -96,7 +96,7 @@ GET _search
}
}
---------------------------------
// AUTOSENSE
// CONSOLE

This `bool` query has a `match_all` query, which assigns a score of `1.0` to
all documents.

@ -119,7 +119,7 @@ GET _search
}
}
---------------------------------
// AUTOSENSE
// CONSOLE

This `constant_score` query behaves in exactly the same way as the second example above.
The `constant_score` query assigns a score of `1.0` to all documents matched
@ -140,7 +140,7 @@ GET _search
}
}
---------------------------------
// AUTOSENSE
// CONSOLE

==== Using named queries to see which clauses matched

@ -491,7 +491,7 @@ GET _search
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE

Next, we show how the computed score looks like for each of the three
possible decay functions.

@ -13,7 +13,7 @@ POST _search
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE
<1> Finds documents which contain the exact term `Kimchy` in the inverted index
of the `user` field.

@ -45,7 +45,7 @@ GET _search
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE

<1> The `urgent` query clause has a boost of `2.0`, meaning it is twice as important
as the query clause for `normal`.

@ -107,7 +107,7 @@ PUT my_index/my_type/1
"exact_value": "Quick Foxes!" <4>
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE

<1> The `full_text` field is of type `text` and will be analyzed.
<2> The `exact_value` field is of type `keyword` and will NOT be analyzed.

@ -154,7 +154,7 @@ GET my_index/my_type/_search
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE
// TEST[continued]

<1> This query matches because the `exact_value` field contains the exact
@ -24,7 +24,7 @@ PUT _cluster/settings
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE
// TEST[skip:indexes don't assign]

==== Step 2: Perform a synced flush

@ -36,7 +36,7 @@ Shard recovery will be much faster if you stop indexing and issue a
--------------------------------------------------
POST _flush/synced
--------------------------------------------------
// AUTOSENSE
// CONSOLE

A synced flush request is a ``best effort'' operation. It will fail if there
are any pending indexing operations, but it is safe to reissue the request

@ -72,7 +72,7 @@ GET _cat/health

GET _cat/nodes
--------------------------------------------------
// AUTOSENSE
// CONSOLE

Use these APIs to check that all nodes have successfully joined the cluster.

@ -104,7 +104,7 @@ PUT _cluster/settings
}
}
------------------------------------------------------
// AUTOSENSE
// CONSOLE

The cluster will now start allocating replica shards to all data nodes. At this
point it is safe to resume indexing and searching, but your cluster will

@ -120,7 +120,7 @@ GET _cat/health

GET _cat/recovery
--------------------------------------------------
// AUTOSENSE
// CONSOLE

Once the `status` column in the `_cat/health` output has reached `green`, all
primary and replica shards have been successfully allocated.
@ -28,7 +28,7 @@ PUT _cluster/settings
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE
// TEST[skip:indexes don't assign]

==== Step 2: Stop non-essential indexing and perform a synced flush (Optional)

@ -41,7 +41,7 @@ will be much faster if you temporarily stop non-essential indexing and issue a
--------------------------------------------------
POST _flush/synced
--------------------------------------------------
// AUTOSENSE
// CONSOLE

A synced flush request is a ``best effort'' operation. It will fail if there
are any pending indexing operations, but it is safe to reissue the request

@ -103,7 +103,7 @@ the log file or by checking the output of this request:
--------------------------------------------------
GET _cat/nodes
--------------------------------------------------
// AUTOSENSE
// CONSOLE

==== Step 6: Reenable shard allocation

@ -119,7 +119,7 @@ PUT _cluster/settings
}
}
--------------------------------------------------
// AUTOSENSE
// CONSOLE

==== Step 7: Wait for the node to recover

@ -131,7 +131,7 @@ request:
--------------------------------------------------
GET _cat/health
--------------------------------------------------
// AUTOSENSE
// CONSOLE

Wait for the `status` column to move from `yellow` to `green`. Status `green`
means that all primary and replica shards have been allocated.
@ -164,7 +164,7 @@ recover. The recovery status of individual shards can be monitored with the
--------------------------------------------------
GET _cat/recovery
--------------------------------------------------
// AUTOSENSE
// CONSOLE

If you stopped indexing, then it is safe to resume indexing as soon as
recovery has completed.