[DOCS] Change // CONSOLE comments to [source,console] (#46441) (#46451)

James Rodewig 2019-09-06 11:31:13 -04:00 committed by GitHub
parent 31b4e2f6df
commit c46c57d439
166 changed files with 517 additions and 925 deletions

View File

@ -54,7 +54,7 @@ The settings have the form `azure.client.CLIENT_NAME.SETTING_NAME`. By default,
the <<repository-azure-repository-settings,repository setting>> `client`.
For example:
[source,js]
[source,console]
----
PUT _snapshot/my_backup
{
@ -64,7 +64,6 @@ PUT _snapshot/my_backup
}
}
----
// CONSOLE
// TEST[skip:we don't have azure setup while testing this]
Most client settings can be added to the `elasticsearch.yml` configuration file.

View File

@ -50,21 +50,20 @@ For example, to tell {es} to allocate shards from the `test` index to either
`big` or `medium` nodes, use `index.routing.allocation.include`:
+
--
[source,js]
[source,console]
------------------------
PUT test/_settings
{
"index.routing.allocation.include.size": "big,medium"
}
------------------------
// CONSOLE
// TEST[s/^/PUT test\n/]
If you specify multiple filters, all conditions must be satisfied for shards to
be relocated. For example, to move the `test` index to `big` nodes in `rack1`,
you could specify:
[source,js]
[source,console]
------------------------
PUT test/_settings
{
@ -72,7 +71,6 @@ PUT test/_settings
"index.routing.allocation.include.rack": "rack1"
}
------------------------
// CONSOLE
// TEST[s/^/PUT test\n/]
--
@ -106,12 +104,11 @@ The index allocation settings support the following built-in attributes:
You can use wildcards when specifying attribute values, for example:
[source,js]
[source,console]
------------------------
PUT test/_settings
{
"index.routing.allocation.include._ip": "192.168.2.*"
}
------------------------
// CONSOLE
// TEST[skip:indexes don't assign]

View File

@ -13,7 +13,7 @@ This means that, by default, newer indices will be recovered before older indice
Use the per-index dynamically updatable `index.priority` setting to customise
the index prioritization order. For instance:
[source,js]
[source,console]
------------------------------
PUT index_1
@ -33,7 +33,6 @@ PUT index_4
}
}
------------------------------
// CONSOLE
In the above example:
@ -45,12 +44,11 @@ In the above example:
This setting accepts an integer, and can be updated on a live index with the
<<indices-update-settings,update index settings API>>:
[source,js]
[source,console]
------------------------------
PUT index_4/_settings
{
"index.priority": 1
}
------------------------------
// CONSOLE
// TEST[continued]

View File

@ -12,7 +12,7 @@ An error will be thrown if index sorting is activated on an index that contains
For instance, the following example shows how to define a sort on a single field:
[source,js]
[source,console]
--------------------------------------------------
PUT twitter
{
@ -31,14 +31,13 @@ PUT twitter
}
}
--------------------------------------------------
// CONSOLE
<1> This index is sorted by the `date` field
<2> ... in descending order.
It is also possible to sort the index by more than one field:
[source,js]
[source,console]
--------------------------------------------------
PUT twitter
{
@ -61,7 +60,6 @@ PUT twitter
}
}
--------------------------------------------------
// CONSOLE
<1> This index is sorted by `username` first then by `date`
<2> ... in ascending order for the `username` field and in descending order for the `date` field.
@ -112,7 +110,7 @@ Though when the index sort and the search sort are the same it is possible to li
the number of documents that should be visited per segment to retrieve the N top ranked documents globally.
For example, let's say we have an index that contains events sorted by a timestamp field:
[source,js]
[source,console]
--------------------------------------------------
PUT events
{
@ -131,13 +129,12 @@ PUT events
}
}
--------------------------------------------------
// CONSOLE
<1> This index is sorted by timestamp in descending order (most recent first)
You can search for the last 10 events with:
[source,js]
[source,console]
--------------------------------------------------
GET /events/_search
{
@ -147,7 +144,6 @@ GET /events/_search
]
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
Elasticsearch will detect that the top docs of each segment are already sorted in the index
@ -159,7 +155,7 @@ If you're only looking for the last 10 events and have no interest in
the total number of documents that match the query you can set `track_total_hits`
to false:
[source,js]
[source,console]
--------------------------------------------------
GET /events/_search
{
@ -170,7 +166,6 @@ GET /events/_search
"track_total_hits": false
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
<1> The index sort will be used to rank the top documents and each segment will early terminate the collection after the first 10 matches.

View File

@ -18,7 +18,7 @@ can be configured via the index settings as shown below. The index
options can be provided when creating an index or updating index
settings.
[source,js]
[source,console]
--------------------------------------------------
PUT /index
{
@ -37,12 +37,11 @@ PUT /index
}
}
--------------------------------------------------
// CONSOLE
Here we configure the DFRSimilarity so it can be referenced as
`my_similarity` in mappings, as illustrated in the example below:
[source,js]
[source,console]
--------------------------------------------------
PUT /index/_mapping
{
@ -51,7 +50,6 @@ PUT /index/_mapping
}
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
[float]
@ -190,7 +188,7 @@ A similarity that allows you to use a script in order to specify how scores
should be computed. For instance, the below example shows how to reimplement
TF-IDF:
[source,js]
[source,console]
--------------------------------------------------
PUT /index
{
@ -237,7 +235,6 @@ GET /index/_search?explain=true
}
}
--------------------------------------------------
// CONSOLE
Which yields:
@ -357,7 +354,7 @@ document-independent contribution to the score.
The below configuration will give the same tf-idf scores but is slightly
more efficient:
[source,js]
[source,console]
--------------------------------------------------
PUT /index
{
@ -385,11 +382,10 @@ PUT /index
}
}
--------------------------------------------------
// CONSOLE
////////////////////
[source,js]
[source,console]
--------------------------------------------------
PUT /index/_doc/1
{
@ -413,7 +409,6 @@ GET /index/_search?explain=true
}
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
[source,js]
@ -523,7 +518,7 @@ By default, Elasticsearch will use whatever similarity is configured as
You can change the default similarity for all fields in an index when
it is <<indices-create-index,created>>:
[source,js]
[source,console]
--------------------------------------------------
PUT /index
{
@ -538,13 +533,12 @@ PUT /index
}
}
--------------------------------------------------
// CONSOLE
If you want to change the default similarity after creating the index
you must <<indices-open-close,close>> your index, send the following
request and <<indices-open-close,open>> it again afterwards:
[source,js]
[source,console]
--------------------------------------------------
POST /index/_close
@ -561,5 +555,4 @@ PUT /index/_settings
POST /index/_open
--------------------------------------------------
// CONSOLE
// TEST[continued]

View File

@ -29,7 +29,7 @@ index.search.slowlog.level: info
All of the above settings are _dynamic_ and can be set for each index using the
<<indices-update-settings, update indices settings>> API. For example:
[source,js]
[source,console]
--------------------------------------------------
PUT /twitter/_settings
{
@ -44,7 +44,6 @@ PUT /twitter/_settings
"index.search.slowlog.level": "info"
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
By default, none are enabled (set to `-1`). Levels (`warn`, `info`,
@ -140,7 +139,7 @@ index.indexing.slowlog.source: 1000
All of the above settings are _dynamic_ and can be set for each index using the
<<indices-update-settings, update indices settings>> API. For example:
[source,js]
[source,console]
--------------------------------------------------
PUT /twitter/_settings
{
@ -152,7 +151,6 @@ PUT /twitter/_settings
"index.indexing.slowlog.source": "1000"
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
By default Elasticsearch will log the first 1000 characters of the _source in

View File

@ -22,7 +22,7 @@ index.store.type: niofs
It is a _static_ setting that can be set on a per-index basis at index
creation time:
[source,js]
[source,console]
---------------------------------
PUT /my_index
{
@ -31,7 +31,6 @@ PUT /my_index
}
}
---------------------------------
// CONSOLE
WARNING: This is an expert-only setting and may be removed in the future.
@ -112,7 +111,7 @@ index.store.preload: ["nvd", "dvd"]
or in the index settings at index creation time:
[source,js]
[source,console]
---------------------------------
PUT /my_index
{
@ -121,7 +120,6 @@ PUT /my_index
}
}
---------------------------------
// CONSOLE
The default value is the empty array, which means that nothing will be loaded
into the file-system cache eagerly. For indices that are actively searched,

View File

@ -8,11 +8,10 @@ Creates or updates an index alias.
include::alias-exists.asciidoc[tag=index-alias-def]
[source,js]
[source,console]
----
PUT /twitter/_alias/alias1
----
// CONSOLE
// TEST[setup:twitter]
@ -68,11 +67,10 @@ include::{docdir}/rest-api/common-parms.asciidoc[tag=index-routing]
The following request creates an alias, `2030`,
for the `logs_20302801` index.
[source,js]
[source,console]
--------------------------------------------------
PUT /logs_20302801/_alias/2030
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT logs_20302801\n/]
[[add-alias-api-user-ex]]
@ -81,7 +79,7 @@ PUT /logs_20302801/_alias/2030
First, create an index, `users`,
with a mapping for the `user_id` field:
[source,js]
[source,console]
--------------------------------------------------
PUT /users
{
@ -92,11 +90,10 @@ PUT /users
}
}
--------------------------------------------------
// CONSOLE
Then add the index alias for a specific user, `user_12`:
[source,js]
[source,console]
--------------------------------------------------
PUT /users/_alias/user_12
{
@ -108,7 +105,6 @@ PUT /users/_alias/user_12
}
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
[[alias-index-creation]]
@ -117,7 +113,7 @@ PUT /users/_alias/user_12
You can use the <<create-index-aliases,create index API>>
to add an index alias during index creation.
[source,js]
[source,console]
--------------------------------------------------
PUT /logs_20302801
{
@ -136,4 +132,3 @@ PUT /logs_20302801
}
}
--------------------------------------------------
// CONSOLE

View File

@ -15,11 +15,10 @@ The returned HTTP status code indicates whether the index alias exists or not.
A `404` means it does not exist,
and `200` means it does.
[source,js]
[source,console]
----
HEAD /_alias/alias1
----
// CONSOLE
// TEST[setup:twitter]
// TEST[s/^/PUT twitter\/_alias\/alias1\n/]
@ -68,11 +67,10 @@ Indicates one or more specified index aliases **do not** exist.
[[alias-exists-api-example]]
==== {api-examples-title}
[source,js]
[source,console]
----
HEAD /_alias/2030
HEAD /_alias/20*
HEAD /logs_20302801/_alias/*
----
// CONSOLE
// TEST[s/^/PUT logs_20302801\nPUT logs_20302801\/_alias\/2030\n/]

View File

@ -8,7 +8,7 @@ Adds or removes index aliases.
include::alias-exists.asciidoc[tag=index-alias-def]
[source,js]
[source,console]
----
POST /_aliases
{
@ -17,7 +17,6 @@ POST /_aliases
]
}
----
// CONSOLE
// TEST[setup:twitter]
@ -153,7 +152,7 @@ See <<aliases-routing>> for an example.
The following request adds the `alias1` alias to the `test1` index.
[source,js]
[source,console]
--------------------------------------------------
POST /_aliases
{
@ -162,7 +161,6 @@ POST /_aliases
]
}
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT test1\nPUT test2\n/]
[[indices-aliases-api-remove-alias-ex]]
@ -170,7 +168,7 @@ POST /_aliases
The following request removes the `alias1` alias.
[source,js]
[source,console]
--------------------------------------------------
POST /_aliases
{
@ -179,7 +177,6 @@ POST /_aliases
]
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
[[indices-aliases-api-rename-alias-ex]]
@ -189,7 +186,7 @@ Renaming an alias is a simple `remove` then `add` operation within the
same API. This operation is atomic; there is no need to worry about a short
period of time where the alias does not point to an index:
[source,js]
[source,console]
--------------------------------------------------
POST /_aliases
{
@ -199,7 +196,6 @@ POST /_aliases
]
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
[[indices-aliases-api-add-multi-alias-ex]]
@ -208,7 +204,7 @@ POST /_aliases
Associating an alias with more than one index is simply several `add`
actions:
[source,js]
[source,console]
--------------------------------------------------
POST /_aliases
{
@ -218,12 +214,11 @@ POST /_aliases
]
}
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT test1\nPUT test2\n/]
Multiple indices can be specified for an action with the `indices` array syntax:
[source,js]
[source,console]
--------------------------------------------------
POST /_aliases
{
@ -232,7 +227,6 @@ POST /_aliases
]
}
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT test1\nPUT test2\n/]
To specify multiple aliases in one action, the corresponding `aliases` array
@ -241,7 +235,7 @@ syntax exists as well.
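For instance, a minimal sketch reusing the `test1` index from the examples above (`alias1` and `alias2` are illustrative names):
[source,console]
--------------------------------------------------
POST /_aliases
{
  "actions" : [
    { "add" : { "index" : "test1", "aliases" : ["alias1", "alias2"] } }
  ]
}
--------------------------------------------------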
For the example above, a glob pattern can also be used to associate an alias
with more than one index that share a common name:
[source,js]
[source,console]
--------------------------------------------------
POST /_aliases
{
@ -250,7 +244,6 @@ POST /_aliases
]
}
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT test1\nPUT test2\n/]
In this case, the alias is a point-in-time alias that will group all
@ -261,7 +254,7 @@ It is an error to index to an alias which points to more than one index.
It is also possible to swap an index with an alias in one operation:
[source,js]
[source,console]
--------------------------------------------------
PUT test <1>
PUT test_2 <2>
@ -273,7 +266,7 @@ POST /_aliases
]
}
--------------------------------------------------
// CONSOLE
<1> An index we've added by mistake
<2> The index we should have added
<3> `remove_index` is just like <<indices-delete-index>>
@ -289,7 +282,7 @@ this alias.
To create a filtered alias, first we need to ensure that the fields already
exist in the mapping:
[source,js]
[source,console]
--------------------------------------------------
PUT /test1
{
@ -302,11 +295,10 @@ PUT /test1
}
}
--------------------------------------------------
// CONSOLE
Now we can create an alias that uses a filter on field `user`:
[source,js]
[source,console]
--------------------------------------------------
POST /_aliases
{
@ -321,7 +313,6 @@ POST /_aliases
]
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
[[aliases-routing]]
@ -335,7 +326,7 @@ The following command creates a new alias `alias1` that points to index
`test`. After `alias1` is created, all operations with this alias are
automatically modified to use value `1` for routing:
[source,js]
[source,console]
--------------------------------------------------
POST /_aliases
{
@ -350,13 +341,12 @@ POST /_aliases
]
}
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT test\n/]
It's also possible to specify different routing values for searching
and indexing operations:
[source,js]
[source,console]
--------------------------------------------------
POST /_aliases
{
@ -372,7 +362,6 @@ POST /_aliases
]
}
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT test\n/]
As shown in the example above, search routing may contain several values
@ -383,11 +372,10 @@ intersection of both search alias routing and routing specified in the
parameter is used. For example, the following command will use `2` as a
routing value:
[source,js]
[source,console]
--------------------------------------------------
GET /alias2/_search?q=user:kimchy&routing=2,3
--------------------------------------------------
// CONSOLE
// TEST[continued]
[[aliases-write-index]]
@ -405,7 +393,7 @@ and index creation API.
Setting an index to be the write index with an alias also affects how the alias is manipulated during
Rollover (see <<indices-rollover-index, Rollover With Write Index>>).
[source,js]
[source,console]
--------------------------------------------------
POST /_aliases
{
@ -426,36 +414,33 @@ POST /_aliases
]
}
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT test\nPUT test2\n/]
In this example, we associate the alias `alias1` with both `test` and `test2`, where
`test` will be the write index.
[source,js]
[source,console]
--------------------------------------------------
PUT /alias1/_doc/1
{
"foo": "bar"
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
The new document that was indexed to `/alias1/_doc/1` will be indexed as if it were
`/test/_doc/1`.
[source,js]
[source,console]
--------------------------------------------------
GET /test/_doc/1
--------------------------------------------------
// CONSOLE
// TEST[continued]
To swap which index is the write index for an alias, the Aliases API can be leveraged to
do an atomic swap. The swap is not dependent on the ordering of the actions.
[source,js]
[source,console]
--------------------------------------------------
POST /_aliases
{
@ -476,5 +461,4 @@ POST /_aliases
]
}
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT test\nPUT test2\n/]

View File

@ -7,7 +7,7 @@
Performs <<analysis,analysis>> on a text string
and returns the resulting tokens.
[source,js]
[source,console]
--------------------------------------------------
GET /_analyze
{
@ -15,7 +15,6 @@ GET /_analyze
"text" : "Quick Brown Foxes!"
}
--------------------------------------------------
// CONSOLE
[[analyze-api-request]]
@ -137,7 +136,7 @@ See <<analysis-tokenizers>> for a list of tokenizers.
You can apply any of the built-in analyzers to the text string without
specifying an index.
[source,js]
[source,console]
--------------------------------------------------
GET /_analyze
{
@ -145,14 +144,13 @@ GET /_analyze
"text" : "this is a test"
}
--------------------------------------------------
// CONSOLE
[[analyze-api-text-array-ex]]
===== Array of text strings
If the `text` parameter is provided as an array of strings, it is analyzed as a multi-value field.
[source,js]
[source,console]
--------------------------------------------------
GET /_analyze
{
@ -160,7 +158,6 @@ GET /_analyze
"text" : ["this is a test", "the second text"]
}
--------------------------------------------------
// CONSOLE
[[analyze-api-custom-analyzer-ex]]
===== Custom analyzer
@ -169,7 +166,7 @@ You can use the analyze API to test a custom transient analyzer built from
tokenizers, token filters, and char filters. Token filters use the `filter`
parameter:
[source,js]
[source,console]
--------------------------------------------------
GET /_analyze
{
@ -178,9 +175,8 @@ GET /_analyze
"text" : "this is a test"
}
--------------------------------------------------
// CONSOLE
[source,js]
[source,console]
--------------------------------------------------
GET /_analyze
{
@ -190,13 +186,12 @@ GET /_analyze
"text" : "this is a <b>test</b>"
}
--------------------------------------------------
// CONSOLE
deprecated[5.0.0, Use `filter`/`char_filter` instead of `filters`/`char_filters` and `token_filters` has been removed]
Custom tokenizers, token filters, and character filters can be specified in the request body as follows:
[source,js]
[source,console]
--------------------------------------------------
GET /_analyze
{
@ -205,28 +200,26 @@ GET /_analyze
"text" : "this is a test"
}
--------------------------------------------------
// CONSOLE
[[analyze-api-specific-index-ex]]
===== Specific index
You can also run the analyze API against a specific index:
[source,js]
[source,console]
--------------------------------------------------
GET /analyze_sample/_analyze
{
"text" : "this is a test"
}
--------------------------------------------------
// CONSOLE
// TEST[setup:analyze_sample]
The above will run an analysis on the "this is a test" text, using the
default index analyzer associated with the `analyze_sample` index. An `analyzer`
can also be provided to use a different analyzer:
[source,js]
[source,console]
--------------------------------------------------
GET /analyze_sample/_analyze
{
@ -234,7 +227,6 @@ GET /analyze_sample/_analyze
"text" : "this is a test"
}
--------------------------------------------------
// CONSOLE
// TEST[setup:analyze_sample]
[[analyze-api-field-ex]]
@ -242,7 +234,7 @@ GET /analyze_sample/_analyze
The analyzer can be derived based on a field mapping, for example:
[source,js]
[source,console]
--------------------------------------------------
GET /analyze_sample/_analyze
{
@ -250,7 +242,6 @@ GET /analyze_sample/_analyze
"text" : "this is a test"
}
--------------------------------------------------
// CONSOLE
// TEST[setup:analyze_sample]
Will cause the analysis to happen based on the analyzer configured in the
@ -261,7 +252,7 @@ mapping for `obj1.field1` (and if not, the default index analyzer).
A `normalizer` can be provided for a keyword field with a normalizer associated with the `analyze_sample` index.
[source,js]
[source,console]
--------------------------------------------------
GET /analyze_sample/_analyze
{
@ -269,12 +260,11 @@ GET /analyze_sample/_analyze
"text" : "BaR"
}
--------------------------------------------------
// CONSOLE
// TEST[setup:analyze_sample]
Or by building a custom transient normalizer out of token filters and char filters.
[source,js]
[source,console]
--------------------------------------------------
GET /_analyze
{
@ -282,7 +272,6 @@ GET /_analyze
"text" : "BaR"
}
--------------------------------------------------
// CONSOLE
[[explain-analyze-api]]
===== Explain analyze
@ -292,7 +281,7 @@ You can filter token attributes you want to output by setting `attributes` optio
NOTE: The format of the additional detail information is labelled as experimental in Lucene and it may change in the future.
[source,js]
[source,console]
--------------------------------------------------
GET /_analyze
{
@ -303,7 +292,7 @@ GET /_analyze
"attributes" : ["keyword"] <1>
}
--------------------------------------------------
// CONSOLE
<1> Set "keyword" to output "keyword" attribute only
The request returns the following result:
@ -367,7 +356,7 @@ The following setting allows you to limit the number of tokens that can be produced:
the limit for a specific index:
[source,js]
[source,console]
--------------------------------------------------
PUT /analyze_sample
{
@ -376,15 +365,13 @@ PUT /analyze_sample
}
}
--------------------------------------------------
// CONSOLE
[source,js]
[source,console]
--------------------------------------------------
GET /analyze_sample/_analyze
{
"text" : "this is a test"
}
--------------------------------------------------
// CONSOLE
// TEST[setup:analyze_sample]

View File

@ -39,11 +39,10 @@ limitation might be removed in the future.
The following example freezes and unfreezes an index:
[source,js]
[source,console]
--------------------------------------------------
POST /my_index/_freeze
POST /my_index/_unfreeze
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT my_index\n/]

View File

@ -11,7 +11,7 @@ Synonym filters (both `synonym` and `synonym_graph`) can be declared as
updateable if they are only used in <<search-analyzer,search analyzers>>
with the `updateable` flag:
[source,js]
[source,console]
--------------------------------------------------
PUT /my_index
{
@ -45,7 +45,6 @@ PUT /my_index
}
}
--------------------------------------------------
// CONSOLE
<1> Mark the synonym filter as updateable.
<2> The synonym analyzer is usable as a `search_analyzer`.
@ -64,11 +63,10 @@ to update the synonym file contents on every data node (even the ones that don't
hold shard copies; shards might be relocated there in the future) before calling
reload to ensure the new state of the file is reflected everywhere in the cluster.
[source,js]
[source,console]
--------------------------------------------------
POST /my_index/_reload_search_analyzers
--------------------------------------------------
// CONSOLE
// TEST[continued]
The reload request returns information about the nodes it was executed on and the

View File

@ -38,10 +38,9 @@ limitation might be removed in the future.
The following example freezes and unfreezes an index:
[source,js]
[source,console]
--------------------------------------------------
POST /my_index/_freeze
POST /my_index/_unfreeze
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT my_index\n/]

View File

@ -4,24 +4,23 @@
The clear cache API allows you to clear either all caches or specific caches
associated with one or more indices.
[source,js]
[source,console]
--------------------------------------------------
POST /twitter/_cache/clear
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
The API, by default, will clear all caches. Specific caches can be cleared
explicitly by setting the `query`, `fielddata`, or `request` URL parameter to `true`.
[source,js]
[source,console]
--------------------------------------------------
POST /twitter/_cache/clear?query=true <1>
POST /twitter/_cache/clear?request=true <2>
POST /twitter/_cache/clear?fielddata=true <3>
--------------------------------------------------
// CONSOLE
// TEST[continued]
<1> Cleans only the query cache
<2> Cleans only the request cache
<3> Cleans only the fielddata cache
@ -31,12 +30,12 @@ cleared by specifying `fields` url parameter with a comma delimited list of
the fields that should be cleared. Note that the provided names must refer to
concrete fields -- objects and field aliases are not supported.
[source,js]
[source,console]
--------------------------------------------------
POST /twitter/_cache/clear?fields=foo,bar <1>
--------------------------------------------------
// CONSOLE
// TEST[continued]
<1> Clear the cache for the `foo` and `bar` fields
[float]
@ -45,11 +44,10 @@ POST /twitter/_cache/clear?fields=foo,bar <1>
The clear cache API can be applied to more than one index with a single
call, or even on `_all` the indices.
[source,js]
[source,console]
--------------------------------------------------
POST /kimchy,elasticsearch/_cache/clear
POST /_cache/clear
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT kimchy\nPUT elasticsearch\n/]

View File

@ -25,7 +25,7 @@ Cloning works as follows:
Create a new index:
[source,js]
[source,console]
--------------------------------------------------
PUT my_source_index
{
@ -34,14 +34,13 @@ PUT my_source_index
}
}
--------------------------------------------------
// CONSOLE
In order to clone an index, the index must be marked as read-only,
and have <<cluster-health,health>> `green`.
This can be achieved with the following request:
[source,js]
[source,console]
--------------------------------------------------
PUT /my_source_index/_settings
{
@ -50,7 +49,6 @@ PUT /my_source_index/_settings
}
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
<1> Prevents write operations to this index while still allowing metadata
@ -62,11 +60,10 @@ PUT /my_source_index/_settings
To clone `my_source_index` into a new index called `my_target_index`, issue
the following request:
[source,js]
[source,console]
--------------------------------------------------
POST my_source_index/_clone/my_target_index
--------------------------------------------------
// CONSOLE
// TEST[continued]
The above request returns immediately once the target index has been added to
@ -89,7 +86,7 @@ Indices can only be cloned if they satisfy the following requirements:
The `_clone` API is similar to the <<indices-create-index, `create index` API>>
and accepts `settings` and `aliases` parameters for the target index:
[source,js]
[source,console]
--------------------------------------------------
POST my_source_index/_clone/my_target_index
{
@ -101,7 +98,6 @@ POST my_source_index/_clone/my_target_index
}
}
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT my_source_index\n{"settings": {"index.blocks.write": true, "index.number_of_shards": "5"}}\n/]
<1> The number of shards in the target index. This must be equal to the

View File

@ -6,11 +6,10 @@
Closes an index.
[source,js]
[source,console]
--------------------------------------------------
POST /twitter/_close
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
@ -61,11 +60,10 @@ include::{docdir}/rest-api/common-parms.asciidoc[tag=timeoutparms]
The following example shows how to close an index:
[source,js]
[source,console]
--------------------------------------------------
POST /my_index/_close
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT my_index\n/]
The API returns the following response:

View File

@ -6,11 +6,10 @@
Creates a new index.
[source,js]
[source,console]
--------------------------------------------------
PUT /twitter
--------------------------------------------------
// CONSOLE
[[indices-create-api-request]]
@ -77,7 +76,7 @@ include::{docdir}/rest-api/common-parms.asciidoc[tag=settings]
Each index created can have specific settings
associated with it, defined in the body:
[source,js]
[source,console]
--------------------------------------------------
PUT /twitter
{
@ -89,13 +88,13 @@ PUT /twitter
}
}
--------------------------------------------------
// CONSOLE
<1> Default for `number_of_shards` is 1
<2> Default for `number_of_replicas` is 1 (i.e. one replica for each primary shard)
or, more simply:
[source,js]
[source,console]
--------------------------------------------------
PUT /twitter
{
@ -105,7 +104,6 @@ PUT /twitter
}
}
--------------------------------------------------
// CONSOLE
[NOTE]
You do not have to explicitly specify `index` section inside the
@ -120,7 +118,7 @@ that can be set when creating an index, please check the
The create index API allows for providing a mapping definition:
[source,js]
[source,console]
--------------------------------------------------
PUT /test
{
@ -134,7 +132,6 @@ PUT /test
}
}
--------------------------------------------------
// CONSOLE
NOTE: Before 7.0.0, the 'mappings' definition used to include a type name. Although specifying
types in requests is now deprecated, a type can still be provided if the request parameter
@ -145,7 +142,7 @@ include_type_name is set. For more details, please see <<removal-of-types>>.
The create index API also allows providing a set of <<indices-aliases,aliases>>:
[source,js]
[source,console]
--------------------------------------------------
PUT /test
{
@ -160,7 +157,6 @@ PUT /test
}
}
--------------------------------------------------
// CONSOLE
[[create-index-wait-for-active-shards]]
===== Wait For active shards
@ -193,7 +189,7 @@ We can change the default of only waiting for the primary shards to start throug
setting `index.write.wait_for_active_shards` (note that changing this setting will also affect
the `wait_for_active_shards` value on all subsequent write operations):
[source,js]
[source,console]
--------------------------------------------------
PUT /test
{
@ -202,16 +198,14 @@ PUT /test
}
}
--------------------------------------------------
// CONSOLE
// TEST[skip:requires two nodes]
or through the request parameter `wait_for_active_shards`:
[source,js]
[source,console]
--------------------------------------------------
PUT /test?wait_for_active_shards=2
--------------------------------------------------
// CONSOLE
// TEST[skip:requires two nodes]
A detailed explanation of `wait_for_active_shards` and its possible values can be found

View File

@ -8,11 +8,10 @@ Deletes an existing index alias.
include::alias-exists.asciidoc[tag=index-alias-def]
[source,js]
[source,console]
----
DELETE /twitter/_alias/alias1
----
// CONSOLE
// TEST[setup:twitter]
// TEST[s/^/PUT twitter\/_alias\/alias1\n/]

View File

@ -7,7 +7,7 @@
Deletes an existing index template.
////
[source,js]
[source,console]
--------------------------------------------------
PUT _template/template_1
{
@ -17,15 +17,13 @@ PUT _template/template_1
}
}
--------------------------------------------------
// CONSOLE
// TESTSETUP
////
[source,js]
[source,console]
--------------------------------------------------
DELETE /_template/template_1
--------------------------------------------------
// CONSOLE
[[delete-template-api-request]]

View File

@ -6,11 +6,10 @@
Deletes an existing index.
[source,js]
[source,console]
--------------------------------------------------
DELETE /twitter
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

View File

@ -22,11 +22,10 @@ call the flush API after indexing some documents then a successful response
indicates that {es} has flushed all the documents that were indexed before the
flush API was called.
[source,js]
[source,console]
--------------------------------------------------
POST twitter/_flush
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
[float]
@ -53,13 +52,12 @@ uncommitted changes are present. This parameter should be considered internal.
The flush API can be applied to more than one index with a single call, or even
on `_all` the indices.
[source,js]
[source,console]
--------------------------------------------------
POST kimchy,elasticsearch/_flush
POST _flush
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT kimchy\nPUT elasticsearch\n/]
[[synced-flush-api]]
@ -88,11 +86,10 @@ marker, recovery of this kind of cluster would be much slower.
To check whether a shard has a `sync_id` marker or not, look for the `commit`
section of the shard stats returned by the <<indices-stats,indices stats>> API:
[source,sh]
[source,console]
--------------------------------------------------
GET twitter/_stats?filter_path=**.commit&level=shards <1>
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT twitter\nPOST twitter\/_flush\/synced\n/]
<1> `filter_path` is used to reduce the verbosity of the response, but is entirely optional
@ -156,11 +153,10 @@ shards will fail to sync-flush. The successfully sync-flushed shards will have
faster recovery times as long as the `sync_id` marker is not removed by a
subsequent flush.
[source,sh]
[source,console]
--------------------------------------------------
POST twitter/_flush/synced
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
The response contains details about how many shards were successfully
@ -256,10 +252,9 @@ be `409 Conflict`.
The synced flush API can be applied to more than one index with a single call,
or even on `_all` the indices.
[source,js]
[source,console]
--------------------------------------------------
POST kimchy,elasticsearch/_flush/synced
POST _flush/synced
--------------------------------------------------
// CONSOLE

View File

@ -20,11 +20,10 @@ is lost before completion then the force merge process will continue in the
background. Any new requests to force merge the same indices will also block
until the ongoing force merge is complete.
[source,js]
[source,console]
--------------------------------------------------
POST /twitter/_forcemerge
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
Force-merging can be useful with time-based indices and when using
@ -32,11 +31,10 @@ Force-merging can be useful with time-based indices and when using
indexing traffic for a certain period of time, and once an index will receive
no more writes its shards can be force-merged down to a single segment:
[source,js]
[source,console]
--------------------------------------------------
POST /logs-000001/_forcemerge?max_num_segments=1
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
// TEST[s/logs-000001/twitter/]
@ -64,11 +62,10 @@ deletes. Defaults to `false`. Note that this won't override the
`flush`:: Should a flush be performed after the forced merge. Defaults to
`true`.
[source,js]
[source,console]
--------------------------------------------------
POST /kimchy/_forcemerge?only_expunge_deletes=false&max_num_segments=100&flush=true
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT kimchy\n/]
[float]
@ -82,11 +79,10 @@ temporarily increase, up to double its size in case `max_num_segments` is set
to `1`, as all segments need to be rewritten into a new one.
[source,js]
[source,console]
--------------------------------------------------
POST /kimchy,elasticsearch/_forcemerge
POST /_forcemerge
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT kimchy\nPUT elasticsearch\n/]

View File

@ -8,11 +8,10 @@ Returns information about one or more index aliases.
include::alias-exists.asciidoc[tag=index-alias-def]
[source,js]
[source,console]
----
GET /twitter/_alias/alias1
----
// CONSOLE
// TEST[setup:twitter]
// TEST[s/^/PUT twitter\/_alias\/alias1\n/]
@ -71,7 +70,7 @@ with two aliases:
in the `logs_20302801` index
with a `year` field value of `2030`
[source,js]
[source,console]
--------------------------------------------------
PUT /logs_20302801
{
@ -85,16 +84,14 @@ PUT /logs_20302801
}
}
--------------------------------------------------
// CONSOLE
The following get index alias API request returns all aliases
for the index `logs_20302801`:
[source,js]
[source,console]
--------------------------------------------------
GET /logs_20302801/_alias/*
--------------------------------------------------
// CONSOLE
// TEST[continued]
The API returns the following response:
@ -124,11 +121,10 @@ The API returns the following response:
The following index alias API request returns the `2030` alias:
[source,js]
[source,console]
--------------------------------------------------
GET /_alias/2030
--------------------------------------------------
// CONSOLE
// TEST[continued]
The API returns the following response:
@ -156,11 +152,10 @@ The API returns the following response:
The following index alias API request returns any aliases that begin with `20`:
[source,js]
[source,console]
--------------------------------------------------
GET /_alias/20*
--------------------------------------------------
// CONSOLE
// TEST[continued]
The API returns the following response:

View File

@ -8,11 +8,10 @@ Retrieves <<mapping,mapping definitions>> for one or more fields. This is useful
if you don't need the <<indices-get-mapping,complete mapping>> of an index or
your index contains a large number of fields.
[source,js]
[source,console]
----
GET /twitter/_mapping/field/user
----
// CONSOLE
// TEST[setup:twitter]
@ -62,7 +61,7 @@ You can provide field mappings when creating a new index. The following
<<indices-create-index, create index>> API request creates the `publications`
index with several field mappings.
[source,js]
[source,console]
--------------------------------------------------
PUT /publications
{
@ -81,15 +80,13 @@ PUT /publications
}
}
--------------------------------------------------
// CONSOLE
The following returns the mapping of the field `title` only:
[source,js]
[source,console]
--------------------------------------------------
GET publications/_mapping/field/title
--------------------------------------------------
// CONSOLE
// TEST[continued]
The API returns the following response:
@ -119,11 +116,10 @@ The get mapping API allows you to specify a comma-separated list of fields.
For instance, to select the `id` of the `author` field, you must use its full name `author.id`.
[source,js]
[source,console]
--------------------------------------------------
GET publications/_mapping/field/author.id,abstract,name
--------------------------------------------------
// CONSOLE
// TEST[continued]
returns:
@ -156,11 +152,10 @@ returns:
The get field mapping API also supports wildcard notation.
[source,js]
[source,console]
--------------------------------------------------
GET publications/_mapping/field/a*
--------------------------------------------------
// CONSOLE
// TEST[continued]
returns:
@ -209,7 +204,7 @@ following syntax: `host:port/<index>/_mapping/field/<field>` where
get mappings for all indices you can use `_all` for `<index>`. The
following are some examples:
[source,js]
[source,console]
--------------------------------------------------
GET /twitter,kimchy/_mapping/field/message
@ -217,6 +212,5 @@ GET /_all/_mapping/field/message,user.id
GET /_all/_mapping/field/*.id
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
// TEST[s/^/PUT kimchy\nPUT book\n/]

View File

@ -7,7 +7,7 @@
Returns information about one or more index templates.
////
[source,js]
[source,console]
--------------------------------------------------
PUT _template/template_1
{
@ -17,15 +17,13 @@ PUT _template/template_1
}
}
--------------------------------------------------
// CONSOLE
// TESTSETUP
////
[source,js]
[source,console]
--------------------------------------------------
GET /_template/template_1
--------------------------------------------------
// CONSOLE
[[get-template-api-request]]
@ -62,28 +60,25 @@ include::{docdir}/rest-api/common-parms.asciidoc[tag=master-timeout]
[[get-template-api-multiple-ex]]
===== Get multiple index templates
[source,js]
[source,console]
--------------------------------------------------
GET /_template/template_1,template_2
--------------------------------------------------
// CONSOLE
[[get-template-api-wildcard-ex]]
===== Get index templates using a wildcard expression
[source,js]
[source,console]
--------------------------------------------------
GET /_template/temp*
--------------------------------------------------
// CONSOLE
[[get-template-api-all-ex]]
===== Get all index templates
[source,js]
[source,console]
--------------------------------------------------
GET /_template
--------------------------------------------------
// CONSOLE

View File

@ -6,11 +6,10 @@
Returns information about one or more indices.
[source,js]
[source,console]
--------------------------------------------------
GET /twitter
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
NOTE: Before 7.0.0, the 'mappings' definition used to include a type name. Although mappings

View File

@ -6,11 +6,10 @@
Retrieves <<mapping,mapping definitions>> for indices in a cluster.
[source,js]
[source,console]
--------------------------------------------------
GET /twitter/_mapping
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
NOTE: Before 7.0.0, the 'mappings' definition used to include a type name. Although mappings
@ -62,22 +61,20 @@ single call. General usage of the API follows the following syntax:
list of names. To get mappings for all indices you can use `_all` for `{index}`.
The following are some examples:
[source,js]
[source,console]
--------------------------------------------------
GET /twitter,kimchy/_mapping
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
// TEST[s/^/PUT kimchy\nPUT book\n/]
If you want to get mappings of all indices and types then the following
two examples are equivalent:
[source,js]
[source,console]
--------------------------------------------------
GET /_all/_mapping
GET /_mapping
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

View File

@ -6,11 +6,10 @@
Returns setting information for an index.
[source,js]
[source,console]
--------------------------------------------------
GET /twitter/_settings
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
@ -63,7 +62,7 @@ The get settings API can be used to get settings for more than one index with a
single call. To get settings for all indices you can use `_all` for `<index>`.
Wildcard expressions are also supported. The following are some examples:
[source,js]
[source,console]
--------------------------------------------------
GET /twitter,kimchy/_settings
@ -71,7 +70,6 @@ GET /_all/_settings
GET /log_2013_*/_settings
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
// TEST[s/^/PUT kimchy\nPUT log_2013_01_01\n/]
@ -80,9 +78,8 @@ GET /log_2013_*/_settings
The settings that are returned can be filtered with wildcard matching
as follows:
[source,js]
[source,console]
--------------------------------------------------
GET /log_2013_-*/_settings/index.number_*
--------------------------------------------------
// CONSOLE
// TEST[continued]

View File

@ -8,11 +8,10 @@ Checks if an index exists.
The returned HTTP status code indicates if the index exists or not.
A `404` means it does not exist, and `200` means it does.
[source,js]
[source,console]
--------------------------------------------------
HEAD /twitter
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

View File

@ -6,11 +6,10 @@
Opens a closed index.
[source,js]
[source,console]
--------------------------------------------------
POST /twitter/_open
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
// TEST[s/^/POST \/twitter\/_close\n/]
@ -98,11 +97,10 @@ include::{docdir}/rest-api/common-parms.asciidoc[tag=timeoutparms]
A closed index can be re-opened like this:
[source,js]
[source,console]
--------------------------------------------------
POST /my_index/_open
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT my_index\nPOST my_index\/_close\n/]
The API returns the following response:

View File

@ -7,7 +7,7 @@
Adds new fields to an existing index or changes the search settings of existing
fields.
[source,js]
[source,console]
----
PUT /twitter/_mapping
{
@ -18,7 +18,6 @@ PUT /twitter/_mapping
}
}
----
// CONSOLE
// TEST[setup:twitter]
NOTE: Before 7.0.0, the 'mappings' definition used to include a type name.
@ -87,16 +86,15 @@ The put mapping API requires an existing index. The following
<<indices-create-index, create index>> API request creates the `publications`
index with no mapping.
[source,js]
[source,console]
----
PUT /publications
----
// CONSOLE
The following put mapping API request adds `title`, a new <<text,`text`>> field,
to the `publications` index.
[source,js]
[source,console]
----
PUT /publications/_mapping
{
@ -105,7 +103,6 @@ PUT /publications/_mapping
}
}
----
// CONSOLE
// TEST[continued]
[[put-mapping-api-multi-ex]]
@ -114,7 +111,7 @@ PUT /publications/_mapping
The PUT mapping API can be applied to multiple indices with a single request.
For example, we can update the `twitter-1` and `twitter-2` mappings at the same time:
[source,js]
[source,console]
--------------------------------------------------
# Create the two indices
PUT /twitter-1
@ -130,7 +127,6 @@ PUT /twitter-1,twitter-2/_mapping <1>
}
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
<1> Note that the indices specified (`twitter-1,twitter-2`) follow the <<multi-index,multiple index names>> and wildcard format.
@ -158,7 +154,7 @@ you only want to rename a field, consider adding an <<alias, `alias`>> field.
For example:
[source,js]
[source,console]
-----------------------------------
PUT /my_index <1>
{
@ -195,7 +191,7 @@ PUT /my_index/_mapping
}
}
-----------------------------------
// CONSOLE
<1> Create an index with a `first` field under the `name` <<object>> field, and a `user_id` field.
<2> Add a `last` field under the `name` object field.
<3> Update the `ignore_above` setting from its default of 0.

View File

@ -6,11 +6,10 @@ Recovery status may be reported for specific indices, or cluster-wide.
For example, the following command would show recovery information for the indices `index1` and `index2`.
[source,js]
[source,console]
--------------------------------------------------
GET index1,index2/_recovery?human
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT index1\nPUT index2\n/]
To see cluster-wide recovery status, simply leave out the index names.
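For example, the same request without index names reports recovery status for the whole cluster:
[source,console]
--------------------------------------------------
GET /_recovery?human
--------------------------------------------------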
@ -21,7 +20,7 @@ Here we create a repository and snapshot index1 in
order to restore it right afterwards, and print out the
indices recovery result.
[source,js]
[source,console]
--------------------------------------------------
# create the index
PUT index1
@ -41,7 +40,6 @@ DELETE index1
POST /_snapshot/my_repository/snap_1/_restore?wait_for_completion=true
--------------------------------------------------
// CONSOLE
[source,console-result]
--------------------------------------------------
@ -62,11 +60,10 @@ POST /_snapshot/my_repository/snap_1/_restore?wait_for_completion=true
//////////////////////////
[source,js]
[source,console]
--------------------------------------------------
GET /_recovery?human
--------------------------------------------------
// CONSOLE
// TEST[continued]
Response:
@ -154,11 +151,10 @@ Additionally, the output shows the number and percent of files recovered, as wel
In some cases a higher level of detail may be preferable. Setting `detailed=true` will present a list of physical files in recovery.
[source,js]
[source,console]
--------------------------------------------------
GET _recovery?human&detailed=true
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT index1\n{"settings": {"index.number_of_shards": 1}}\n/]
Response:

View File

@ -7,11 +7,10 @@ The (near) real-time capabilities depend on the index engine used. For
example, the internal one requires refresh to be called, but by default a
refresh is scheduled periodically.
[source,js]
[source,console]
--------------------------------------------------
POST /twitter/_refresh
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
[float]
@ -20,11 +19,10 @@ POST /twitter/_refresh
The refresh API can be applied to more than one index with a single
call, or even on `_all` the indices.
[source,js]
[source,console]
--------------------------------------------------
POST /kimchy,elasticsearch/_refresh
POST /_refresh
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT kimchy\nPUT elasticsearch\n/]

View File

@ -36,7 +36,7 @@ The available conditions are:
| max_size | The maximum estimated size of the primary shard of the index
|===
[source,js]
[source,console]
--------------------------------------------------
PUT /logs-000001 <1>
{
@ -56,7 +56,6 @@ POST /logs_write/_rollover <2>
}
}
--------------------------------------------------
// CONSOLE
// TEST[setup:huge_twitter]
// TEST[s/# Add > 1000 documents to logs-000001/POST _reindex?refresh\n{"source":{"index":"twitter"},"dest":{"index":"logs-000001"}}/]
<1> Creates an index called `logs-000001` with the alias `logs_write`.
@ -98,7 +97,7 @@ of 6, regardless of the old index name.
If the old name doesn't match this pattern then you must specify the name for
the new index as follows:
[source,js]
[source,console]
--------------------------------------------------
POST /my_alias/_rollover/my_new_index_name
{
@ -109,7 +108,6 @@ POST /my_alias/_rollover/my_new_index_name
}
}
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT my_old_index_name\nPUT my_old_index_name\/_alias\/my_alias\n/]
[float]
@ -122,7 +120,7 @@ index name to end with a dash followed by a number, e.g.
`logstash-2016.02.03-1` which is incremented every time the index is rolled
over. For instance:
[source,js]
[source,console]
--------------------------------------------------
# PUT /<logs-{now/d}-1> with URI encoding:
PUT /%3Clogs-%7Bnow%2Fd%7D-1%3E <1>
@ -148,18 +146,17 @@ POST /logs_write/_rollover <2>
}
}
--------------------------------------------------
// CONSOLE
// TEST[s/now/2016.10.31||/]
<1> Creates an index named with today's date, e.g. `logs-2016.10.31-1`
<2> Rolls over to a new index with today's date, e.g. `logs-2016.10.31-000002` if run immediately, or `logs-2016.11.01-000002` if run after 24 hours
//////////////////////////
[source,js]
[source,console]
--------------------------------------------------
GET _alias
--------------------------------------------------
// CONSOLE
// TEST[continued]
[source,console-result]
@ -182,12 +179,11 @@ These indices can then be referenced as described in the
<<date-math-index-names,date math documentation>>. For example, to search
over indices created in the last three days, you could do the following:
[source,js]
[source,console]
--------------------------------------------------
# GET /<logs-{now/d}-*>,<logs-{now/d-1d}-*>,<logs-{now/d-2d}-*>/_search
GET /%3Clogs-%7Bnow%2Fd%7D-*%3E%2C%3Clogs-%7Bnow%2Fd-1d%7D-*%3E%2C%3Clogs-%7Bnow%2Fd-2d%7D-*%3E/_search
--------------------------------------------------
// CONSOLE
// TEST[continued]
// TEST[s/now/2016.10.31||/]
@ -201,7 +197,7 @@ matching <<indices-templates,index templates>>. Additionally, you can specify
override any values set in matching index templates. For example, the following
`rollover` request overrides the `index.number_of_shards` setting:
[source,js]
[source,console]
--------------------------------------------------
PUT /logs-000001
{
@ -222,7 +218,6 @@ POST /logs_write/_rollover
}
}
--------------------------------------------------
// CONSOLE
[float]
==== Dry run
@ -230,7 +225,7 @@ POST /logs_write/_rollover
The rollover API supports `dry_run` mode, where request conditions can be
checked without performing the actual rollover:
[source,js]
[source,console]
--------------------------------------------------
PUT /logs-000001
{
@ -248,7 +243,6 @@ POST /logs_write/_rollover?dry_run
}
}
--------------------------------------------------
// CONSOLE
[float]
==== Wait For Active Shards
@ -272,7 +266,7 @@ indices that are being managed with Rollover.
Look at the behavior of the aliases in the following example where `is_write_index` is set on the rolled over index.
[source,js]
[source,console]
--------------------------------------------------
PUT my_logs_index-000001
{
@ -300,7 +294,7 @@ PUT logs/_doc/2 <2>
"message": "a newer log"
}
--------------------------------------------------
// CONSOLE
<1> configures `my_logs_index` as the write index for the `logs` alias
<2> newly indexed documents against the `logs` alias will write to the new index
@ -323,11 +317,10 @@ PUT logs/_doc/2 <2>
--------------------------------------------------
//////////////////////////
[source,js]
[source,console]
--------------------------------------------------
GET _alias
--------------------------------------------------
// CONSOLE
// TEST[continued]
//////////////////////////

View File

@ -15,7 +15,7 @@ for shards, which has unassigned primaries.
Endpoints include shard stores information for a specific index, several
indices, or all:
[source,js]
[source,console]
--------------------------------------------------
# return information of only index test
GET /test/_shard_stores
@ -26,7 +26,6 @@ GET /test1,test2/_shard_stores
# return information of all indices
GET /_shard_stores
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT test\nPUT test1\nPUT test2\n/]
The scope of shards to list store information can be changed through
shards with at least one unassigned replica and 'red' for shards with an unassigned
primary shard.
Use 'green' to list store information for shards with all assigned copies.
[source,js]
[source,console]
--------------------------------------------------
GET /_shard_stores?status=green
--------------------------------------------------
// CONSOLE
// TEST[setup:node]
// TEST[s/^/PUT my-index\n{"settings":{"number_of_shards":1, "number_of_replicas": 0}}\nPOST my-index\/test\?refresh\n{"test": "test"}\n/]

View File

@ -34,7 +34,7 @@ same node and have <<cluster-health,health>> `green`.
These two conditions can be achieved with the following request:
[source,js]
[source,console]
--------------------------------------------------
PUT /my_source_index/_settings
{
@ -44,8 +44,8 @@ PUT /my_source_index/_settings
}
}
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT my_source_index\n{"settings":{"index.number_of_shards":2}}\n/]
<1> Forces the relocation of a copy of each shard to the node with name
`shrink_node_name`. See <<shard-allocation-filtering>> for more options.
@ -63,7 +63,7 @@ with the `wait_for_no_relocating_shards` parameter.
To shrink `my_source_index` into a new index called `my_target_index`, issue
the following request:
[source,js]
[source,console]
--------------------------------------------------
POST my_source_index/_shrink/my_target_index
{
@ -73,7 +73,6 @@ POST my_source_index/_shrink/my_target_index
}
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
<1> Clear the allocation requirement copied from the source index.
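The progress of the shrink can be followed with the `_cat/recovery` API (an illustrative request, not part of the original example):

[source,console]
--------------------------------------------------
GET _cat/recovery/my_target_index?v
--------------------------------------------------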
@ -107,7 +106,7 @@ Indices can only be shrunk if they satisfy the following requirements:
The `_shrink` API is similar to the <<indices-create-index, `create index` API>>
and accepts `settings` and `aliases` parameters for the target index:
[source,js]
[source,console]
--------------------------------------------------
POST my_source_index/_shrink/my_target_index
{
@ -121,7 +120,6 @@ POST my_source_index/_shrink/my_target_index
}
}
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT my_source_index\n{"settings": {"index.number_of_shards":5,"index.blocks.write": true}}\n/]
<1> The number of shards in the target index. This must be a factor of the

@ -85,7 +85,7 @@ compared to searching an index that would have +M+N+ shards.
Create a new index:
[source,js]
[source,console]
--------------------------------------------------
PUT my_source_index
{
@ -94,14 +94,13 @@ PUT my_source_index
}
}
--------------------------------------------------
// CONSOLE
In order to split an index, the index must be marked as read-only,
and have <<cluster-health,health>> `green`.
This can be achieved with the following request:
[source,js]
[source,console]
--------------------------------------------------
PUT /my_source_index/_settings
{
@ -110,7 +109,6 @@ PUT /my_source_index/_settings
}
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
<1> Prevents write operations to this index while still allowing metadata
@ -122,7 +120,7 @@ PUT /my_source_index/_settings
To split `my_source_index` into a new index called `my_target_index`, issue
the following request:
[source,js]
[source,console]
--------------------------------------------------
POST my_source_index/_split/my_target_index
{
@ -131,7 +129,6 @@ POST my_source_index/_split/my_target_index
}
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
The above request returns immediately once the target index has been added to
@ -157,7 +154,7 @@ Indices can only be split if they satisfy the following requirements:
The `_split` API is similar to the <<indices-create-index, `create index` API>>
and accepts `settings` and `aliases` parameters for the target index:
[source,js]
[source,console]
--------------------------------------------------
POST my_source_index/_split/my_target_index
{
@ -169,7 +166,6 @@ POST my_source_index/_split/my_target_index
}
}
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT my_source_index\n{"settings": {"index.blocks.write": true, "index.number_of_shards": "1"}}\n/]
<1> The number of shards in the target index. This must be a factor of the

@ -8,11 +8,10 @@ Checks if an index template exists.
[source,js]
[source,console]
-----------------------------------------------
HEAD /_template/template_1
-----------------------------------------------
// CONSOLE
[[template-exists-api-request]]

@ -6,7 +6,7 @@
Creates or updates an index template.
[source,js]
[source,console]
--------------------------------------------------
PUT _template/template_1
{
@ -30,7 +30,6 @@ PUT _template/template_1
}
}
--------------------------------------------------
// CONSOLE
// TESTSETUP
@ -128,7 +127,7 @@ This number is not automatically generated by {es}.
You can include <<indices-aliases,index aliases>> in an index template.
[source,js]
[source,console]
--------------------------------------------------
PUT _template/template_1
{
@ -148,7 +147,6 @@ PUT _template/template_1
}
}
--------------------------------------------------
// CONSOLE
// TEST[s/^/DELETE _template\/template_1\n/]
<1> the `{index}` placeholder in the alias name will be replaced with the
@ -164,7 +162,7 @@ of the index. The order of the merging can be controlled using the
`order` parameter, with lower order being applied first, and higher
orders overriding them. For example:
[source,js]
[source,console]
--------------------------------------------------
PUT /_template/template_1
{
@ -190,7 +188,6 @@ PUT /_template/template_2
}
}
--------------------------------------------------
// CONSOLE
// TEST[s/^/DELETE _template\/template_1\n/]
The above will disable storing the `_source`, but
@ -217,7 +214,7 @@ and not automatically generated by {es}.
To unset a `version`,
replace the template without specifying one.
[source,js]
[source,console]
--------------------------------------------------
PUT /_template/template_1
{
@ -229,18 +226,16 @@ PUT /_template/template_1
"version": 123
}
--------------------------------------------------
// CONSOLE
To check the `version`,
you can use the <<indices-get-template, get index template>> API
with the <<common-options-response-filtering, `filter_path`>> query parameter
to return only the version number:
[source,js]
[source,console]
--------------------------------------------------
GET /_template/template_1?filter_path=*.version
--------------------------------------------------
// CONSOLE
// TEST[continued]
The API returns the following response:
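A sketch of that response, assuming the `version` of `123` set in the example above:

[source,console-result]
--------------------------------------------------
{
  "template_1": {
    "version": 123
  }
}
--------------------------------------------------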

@ -5,11 +5,10 @@ deprecated[7.0.0, Types are deprecated and are in the process of being removed.
Used to check if a type or types exist in an index or indices.
[source,js]
[source,console]
--------------------------------------------------
HEAD twitter/_mapping/tweet
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
// TEST[warning:Type exists requests are deprecated, as types have been deprecated.]

@ -6,7 +6,7 @@
Changes an <<index-modules-settings,index setting>> in real time.
[source,js]
[source,console]
--------------------------------------------------
PUT /twitter/_settings
{
@ -15,7 +15,6 @@ PUT /twitter/_settings
}
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
@ -68,7 +67,7 @@ options for the index. See <<index-modules-settings>>.
===== Reset an index setting
To revert a setting to the default value, use `null`. For example:
[source,js]
[source,console]
--------------------------------------------------
PUT /twitter/_settings
{
@ -77,7 +76,6 @@ PUT /twitter/_settings
}
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
The list of per-index settings which can be updated dynamically on live
@ -93,7 +91,7 @@ the index from being more performant for bulk indexing, and then move it
to more real time indexing state. Before the bulk indexing is started,
use:
[source,js]
[source,console]
--------------------------------------------------
PUT /twitter/_settings
{
@ -102,7 +100,6 @@ PUT /twitter/_settings
}
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
(Another optimization option is to start the index without any replicas,
@ -111,7 +108,7 @@ and only later adding them, but that really depends on the use case).
Then, once bulk indexing is done, the settings can be updated (back to
the defaults for example):
[source,js]
[source,console]
--------------------------------------------------
PUT /twitter/_settings
{
@ -120,16 +117,14 @@ PUT /twitter/_settings
}
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
Then, a force merge should be called:
[source,js]
[source,console]
--------------------------------------------------
POST /twitter/_forcemerge?max_num_segments=5
--------------------------------------------------
// CONSOLE
// TEST[continued]
[[update-settings-analysis]]
@ -144,7 +139,7 @@ and reopen the index.
For example,
the following commands add the `content` analyzer to the `twitter` index:
[source,js]
[source,console]
--------------------------------------------------
POST /twitter/_close
@ -162,5 +157,4 @@ PUT /twitter/_settings
POST /twitter/_open
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

@ -28,7 +28,7 @@ way, the ingest node knows which pipeline to use.
For example:
Create a pipeline
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/my_pipeline_id
{
@ -43,19 +43,16 @@ PUT _ingest/pipeline/my_pipeline_id
]
}
--------------------------------------------------
// CONSOLE
// TEST
Index a document with the defined pipeline
[source,js]
[source,console]
--------------------------------------------------
PUT my-index/_doc/my-id?pipeline=my_pipeline_id
{
"foo": "bar"
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
Response

@ -5,7 +5,7 @@ The delete pipeline API deletes pipelines by ID or wildcard match (`my-*`, `*`).
//////////////////////////
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/my-pipeline-id
{
@ -21,15 +21,13 @@ PUT _ingest/pipeline/my-pipeline-id
]
}
--------------------------------------------------
// CONSOLE
//////////////////////////
[source,js]
[source,console]
--------------------------------------------------
DELETE _ingest/pipeline/my-pipeline-id
--------------------------------------------------
// CONSOLE
// TEST[continued]
//////////////////////////
@ -41,7 +39,7 @@ DELETE _ingest/pipeline/my-pipeline-id
}
--------------------------------------------------
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/wild-one
{
@ -55,15 +53,13 @@ PUT _ingest/pipeline/wild-two
"processors" : [ ]
}
--------------------------------------------------
// CONSOLE
//////////////////////////
[source,js]
[source,console]
--------------------------------------------------
DELETE _ingest/pipeline/*
--------------------------------------------------
// CONSOLE
//////////////////////////

@ -5,7 +5,7 @@ The get pipeline API returns pipelines based on ID. This API always returns a lo
//////////////////////////
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/my-pipeline-id
{
@ -20,15 +20,13 @@ PUT _ingest/pipeline/my-pipeline-id
]
}
--------------------------------------------------
// CONSOLE
//////////////////////////
[source,js]
[source,console]
--------------------------------------------------
GET _ingest/pipeline/my-pipeline-id
--------------------------------------------------
// CONSOLE
// TEST[continued]
Example response:
@ -64,7 +62,7 @@ field is completely optional and it is meant solely for external management of
pipelines. To unset a `version`, simply replace the pipeline without specifying
one.
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/my-pipeline-id
{
@ -80,17 +78,15 @@ PUT _ingest/pipeline/my-pipeline-id
]
}
--------------------------------------------------
// CONSOLE
To check for the `version`, you can
<<common-options-response-filtering, filter responses>>
using `filter_path` to limit the response to just the `version`:
[source,js]
[source,console]
--------------------------------------------------
GET /_ingest/pipeline/my-pipeline-id?filter_path=*.version
--------------------------------------------------
// CONSOLE
// TEST[continued]
This should give a small response that makes it both easy and inexpensive to parse:
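A sketch of that response (a `version` of `123` is assumed for illustration):

[source,console-result]
--------------------------------------------------
{
  "my-pipeline-id" : {
    "version" : 123
  }
}
--------------------------------------------------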
@ -106,11 +102,10 @@ This should give a small response that makes it both easy and inexpensive to par
//////////////////////////
[source,js]
[source,console]
--------------------------------------------------
DELETE /_ingest/pipeline/my-pipeline-id
--------------------------------------------------
// CONSOLE
// TEST[continued]
[source,console-result]

@ -3,7 +3,7 @@
The put pipeline API adds pipelines and updates existing pipelines in the cluster.
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/my-pipeline-id
{
@ -18,15 +18,13 @@ PUT _ingest/pipeline/my-pipeline-id
]
}
--------------------------------------------------
// CONSOLE
//////////////////////////
[source,js]
[source,console]
--------------------------------------------------
DELETE /_ingest/pipeline/my-pipeline-id
--------------------------------------------------
// CONSOLE
// TEST[continued]
[source,console-result]

@ -46,7 +46,7 @@ POST _ingest/pipeline/my-pipeline-id/_simulate
Here is an example of a simulate request with a pipeline defined in the request
and its response:
[source,js]
[source,console]
--------------------------------------------------
POST _ingest/pipeline/_simulate
{
@ -80,7 +80,6 @@ POST _ingest/pipeline/_simulate
]
}
--------------------------------------------------
// CONSOLE
Response:
@ -131,7 +130,7 @@ to the request.
Here is an example of a verbose request and its response:
[source,js]
[source,console]
--------------------------------------------------
POST _ingest/pipeline/_simulate?verbose
{
@ -171,7 +170,6 @@ POST _ingest/pipeline/_simulate?verbose
]
}
--------------------------------------------------
// CONSOLE
Response:

@ -167,7 +167,7 @@ For example the following processor will <<drop-processor,drop>> the document
(i.e. not index it) if the input document has a field named `network_name`
and it is equal to `Guest`.
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/drop_guests_network
{
@ -180,18 +180,16 @@ PUT _ingest/pipeline/drop_guests_network
]
}
--------------------------------------------------
// CONSOLE
Using that pipeline for an index request:
[source,js]
[source,console]
--------------------------------------------------
POST test/_doc/1?pipeline=drop_guests_network
{
"network_name" : "Guest"
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
This results in nothing being indexed, since the conditional evaluated to `true`.
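As a quick check (an illustrative request, not part of the original example), fetching the document shows that nothing was stored:

[source,console]
--------------------------------------------------
GET test/_doc/1
--------------------------------------------------

The request returns a `404` with `"found": false`.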
@ -226,7 +224,7 @@ To help protect against NullPointerExceptions, null safe operations should be us
Fortunately, Painless makes {painless}/painless-operators-reference.html#null-safe-operator[null safe]
operations easy with the `?.` operator.
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/drop_guests_network
{
@ -239,11 +237,10 @@ PUT _ingest/pipeline/drop_guests_network
]
}
--------------------------------------------------
// CONSOLE
The following document will get <<drop-processor,dropped>> correctly:
[source,js]
[source,console]
--------------------------------------------------
POST test/_doc/1?pipeline=drop_guests_network
{
@ -252,30 +249,27 @@ POST test/_doc/1?pipeline=drop_guests_network
}
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
Thanks to the `?.` operator, the following document will not throw an error.
If the pipeline used a `.`, the following document would throw a NullPointerException,
since the `network` object is not part of the source document.
[source,js]
[source,console]
--------------------------------------------------
POST test/_doc/2?pipeline=drop_guests_network
{
"foo" : "bar"
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
////
Hidden example assertion:
[source,js]
[source,console]
--------------------------------------------------
GET test/_doc/2
--------------------------------------------------
// CONSOLE
// TEST[continued]
[source,js]
@ -322,7 +316,7 @@ The source document may have the nested fields flattened as such:
If this is the case, use the <<dot-expand-processor, Dot Expand Processor>>
so that the nested fields may be used in a conditional.
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/drop_guests_network
{
@ -340,18 +334,16 @@ PUT _ingest/pipeline/drop_guests_network
]
}
--------------------------------------------------
// CONSOLE
Now the following input document can be used with a conditional in the pipeline.
[source,js]
[source,console]
--------------------------------------------------
POST test/_doc/3?pipeline=drop_guests_network
{
"network.name": "Guest"
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
The `?.` operator works well for use in the `if` conditional
@ -392,7 +384,7 @@ A more complex `if` condition that drops the document (i.e. not index it)
unless it has a multi-valued tag field with at least one value that contains the characters
`prod` (case insensitive).
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/not_prod_dropper
{
@ -405,7 +397,6 @@ PUT _ingest/pipeline/not_prod_dropper
]
}
--------------------------------------------------
// CONSOLE
The conditional needs to be all on one line since JSON does not
support new line characters. However, Kibana's console supports
@ -438,14 +429,13 @@ PUT _ingest/pipeline/not_prod_dropper
// NOTCONSOLE
// TEST[continued]
[source,js]
[source,console]
--------------------------------------------------
POST test/_doc/1?pipeline=not_prod_dropper
{
"tags": ["application:myapp", "env:Stage"]
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
The document is <<drop-processor,dropped>> since `prod` (case insensitive)
@ -454,23 +444,21 @@ is not found in the tags.
The following document is indexed (i.e. not dropped) since
`prod` (case insensitive) is found in the tags.
[source,js]
[source,console]
--------------------------------------------------
POST test/_doc/2?pipeline=not_prod_dropper
{
"tags": ["application:myapp", "env:Production"]
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
////
Hidden example assertion:
[source,js]
[source,console]
--------------------------------------------------
GET test/_doc/2
--------------------------------------------------
// CONSOLE
// TEST[continued]
[source,js]
@ -509,7 +497,7 @@ The combination of the `if` conditional and the <<pipeline-processor>> can resul
yet powerful means to process heterogeneous input. For example, you can define a single pipeline
that delegates to other pipelines based on some criteria.
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/logs_pipeline
{
@ -536,7 +524,6 @@ PUT _ingest/pipeline/logs_pipeline
]
}
--------------------------------------------------
// CONSOLE
The above example allows consumers to point to a single pipeline for all log-based index requests.
Based on the conditional, the correct pipeline will be called to process that type of data.
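An index request then needs to reference only the delegating pipeline (a sketch; the index name is hypothetical, and the routing depends on the conditions defined above):

[source,console]
--------------------------------------------------
POST logs-app/_doc?pipeline=logs_pipeline
{
  "message": "a sample log line"
}
--------------------------------------------------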
@ -555,7 +542,8 @@ expressions in the `if` condition.
If regular expressions are enabled, operators such as `=~` can be used against a `/pattern/` for conditions.
For example:
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/check_url
{
@ -570,9 +558,8 @@ PUT _ingest/pipeline/check_url
]
}
--------------------------------------------------
// CONSOLE
[source,js]
[source,console]
--------------------------------------------------
POST test/_doc/1?pipeline=check_url
{
@ -581,18 +568,16 @@ POST test/_doc/1?pipeline=check_url
}
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
Results in:
////
Hidden example assertion:
[source,js]
[source,console]
--------------------------------------------------
GET test/_doc/1
--------------------------------------------------
// CONSOLE
// TEST[continued]
////
@ -623,7 +608,7 @@ alternatives exist.
For example, in this case `startsWith` can be used to get the same result
without using a regular expression:
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/check_url
{
@ -638,7 +623,6 @@ PUT _ingest/pipeline/check_url
]
}
--------------------------------------------------
// CONSOLE
[[handling-failure-in-pipelines]]
== Handling Failures in Pipelines

@ -20,7 +20,7 @@ include::common-options.asciidoc[]
image:images/spatial/error_distance.png[]
[source,js]
[source,console]
--------------------------------------------------
PUT circles
{
@ -47,7 +47,6 @@ PUT _ingest/pipeline/polygonize_circles
]
}
--------------------------------------------------
// CONSOLE
Using the above pipeline, we can attempt to index a document into the `circles` index.
The circle can be represented as either a WKT circle or a GeoJSON circle. The resulting
@ -58,7 +57,7 @@ be translated to a WKT polygon, and GeoJSON circles will be translated to GeoJSO
In this example, a circle defined in WKT format is indexed:
[source,js]
[source,console]
--------------------------------------------------
PUT circles/_doc/1?pipeline=polygonize_circles
{
@ -67,7 +66,6 @@ PUT circles/_doc/1?pipeline=polygonize_circles
GET circles/_doc/1
--------------------------------------------------
// CONSOLE
// TEST[continued]
The response from the above index request:
@ -93,7 +91,7 @@ The response from the above index request:
In this example, a circle defined in GeoJSON format is indexed:
[source,js]
[source,console]
--------------------------------------------------
PUT circles/_doc/2?pipeline=polygonize_circles
{
@ -106,7 +104,6 @@ PUT circles/_doc/2?pipeline=polygonize_circles
GET circles/_doc/2
--------------------------------------------------
// CONSOLE
// TEST[continued]
The response from the above index request:

@ -16,7 +16,7 @@ expression.
An example pipeline that points documents to a monthly index that starts with a `myindex-` prefix based on a
date in the `date1` field:
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/monthlyindex
{
@ -32,19 +32,17 @@ PUT _ingest/pipeline/monthlyindex
]
}
--------------------------------------------------
// CONSOLE
Using that pipeline for an index request:
[source,js]
[source,console]
--------------------------------------------------
PUT /myindex/_doc/1?pipeline=monthlyindex
{
"date1" : "2016-04-25T12:02:01.789Z"
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
[source,js]
@ -74,7 +72,7 @@ To see the date-math value of the index supplied in the actual index request whi
indexed into `myindex-2016-04-01` we can inspect the effects of the processor using a simulate request.
[source,js]
[source,console]
--------------------------------------------------
POST _ingest/pipeline/_simulate
{
@ -100,7 +98,6 @@ POST _ingest/pipeline/_simulate
]
}
--------------------------------------------------
// CONSOLE
and the result:

@ -41,7 +41,7 @@ in `properties`.
Here is an example that uses the default city database and adds the geographical information to the `geoip` field based on the `ip` field:
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/geoip
{
@ -60,7 +60,6 @@ PUT my_index/_doc/my_id?pipeline=geoip
}
GET my_index/_doc/my_id
--------------------------------------------------
// CONSOLE
Which returns:
@ -90,7 +89,7 @@ Here is an example that uses the default country database and adds the
geographical information to the `geo` field based on the `ip` field. Note that
this database is included in the module. So this:
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/geoip
{
@ -111,7 +110,6 @@ PUT my_index/_doc/my_id?pipeline=geoip
}
GET my_index/_doc/my_id
--------------------------------------------------
// CONSOLE
returns this:
@ -143,7 +141,7 @@ occurs, no `target_field` is inserted into the document.
Here is an example of how documents are indexed when information for "80.231.5.0"
cannot be found:
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/geoip
{
@ -164,7 +162,6 @@ PUT my_index/_doc/my_id?pipeline=geoip
GET my_index/_doc/my_id
--------------------------------------------------
// CONSOLE
Which returns:
@ -194,7 +191,7 @@ as such in the mapping.
You can use the following mapping for the example index above:
[source,js]
[source,console]
--------------------------------------------------
PUT my_ip_locations
{
@ -209,10 +206,9 @@ PUT my_ip_locations
}
}
--------------------------------------------------
// CONSOLE
////
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/geoip
{
@ -251,7 +247,6 @@ GET /my_ip_locations/_search
}
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
[source,js]

@ -155,7 +155,7 @@ the same `or` behavior.
Here is an example of such a configuration executed against the simulate API:
[source,js]
[source,console]
--------------------------------------------------
POST _ingest/pipeline/_simulate
{
@ -183,7 +183,6 @@ POST _ingest/pipeline/_simulate
]
}
--------------------------------------------------
// CONSOLE
response:
@ -216,7 +215,7 @@ that same pipeline, but with `"trace_match": true` configured:
////
Hidden setup for example:
[source,js]
[source,console]
--------------------------------------------------
POST _ingest/pipeline/_simulate
{
@ -245,7 +244,6 @@ POST _ingest/pipeline/_simulate
]
}
--------------------------------------------------
// CONSOLE
////
[source,js]
@ -283,11 +281,10 @@ metadata and will not be indexed.
The Grok Processor comes packaged with its own REST endpoint for retrieving the patterns it ships with.
[source,js]
[source,console]
--------------------------------------------------
GET _ingest/processor/grok
--------------------------------------------------
// CONSOLE
The above request will return a response body containing a key-value representation of the built-in patterns dictionary.
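An abbreviated sketch of that body (illustrative entries only; the full dictionary is much larger):

[source,js]
--------------------------------------------------
{
  "patterns" : {
    "USERNAME" : "[a-zA-Z0-9._-]+",
    ...
  }
}
--------------------------------------------------
// NOTCONSOLE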

@ -25,7 +25,7 @@ An example of using this processor for nesting pipelines would be:
Define an inner pipeline:
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/pipelineA
{
@ -40,11 +40,10 @@ PUT _ingest/pipeline/pipelineA
]
}
--------------------------------------------------
// CONSOLE
Define another pipeline that uses the previously defined inner pipeline:
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/pipelineB
{
@ -64,20 +63,18 @@ PUT _ingest/pipeline/pipelineB
]
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
Now, indexing a document with the outer pipeline applied will also execute the
inner pipeline from within it:
[source,js]
[source,console]
--------------------------------------------------
PUT /myindex/_doc/1?pipeline=pipelineB
{
"field": "value"
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
Response from the index request:

@ -46,7 +46,7 @@ It is possible to use the Script Processor to manipulate document metadata like
ingestion. Here is an example of an Ingest Pipeline that renames the index and type to `my_index` no matter what
was provided in the original index request:
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/my_index
{
@ -63,18 +63,16 @@ PUT _ingest/pipeline/my_index
]
}
--------------------------------------------------
// CONSOLE
Using the above pipeline, we can attempt to index a document into the `any_index` index.
[source,js]
[source,console]
--------------------------------------------------
PUT any_index/_doc/1?pipeline=my_index
{
"message": "text"
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
The response from the above index request:

@ -28,7 +28,7 @@ include::common-options.asciidoc[]
This processor can also be used to copy data from one field to another. For example:
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/set_os
{
@ -54,7 +54,6 @@ POST _ingest/pipeline/set_os/_simulate
]
}
--------------------------------------------------
// CONSOLE
Result:
[source,js]

@ -24,7 +24,7 @@ The ingest-user-agent module ships by default with the regexes.yaml made availab
Here is an example that adds the user agent details to the `user_agent` field based on the `agent` field:
[source,js]
[source,console]
--------------------------------------------------
PUT _ingest/pipeline/user_agent
{
@ -43,7 +43,6 @@ PUT my_index/_doc/my_id?pipeline=user_agent
}
GET my_index/_doc/my_id
--------------------------------------------------
// CONSOLE
Which returns:

@ -31,11 +31,10 @@ For more information, see
The following example deletes the current license:
[source,js]
[source,console]
------------------------------------------------------------
DELETE /_license
------------------------------------------------------------
// CONSOLE
// TEST[skip:license testing issues]
When the license is successfully deleted, the API returns the following response:

@ -33,11 +33,10 @@ For more information, see
The following example checks whether you are eligible to start a basic license:
[source,js]
[source,console]
------------------------------------------------------------
GET /_license/basic_status
------------------------------------------------------------
// CONSOLE
Example response:
[source,js]

@ -44,11 +44,10 @@ For more information, see
The following example provides information about a trial license:
[source,js]
[source,console]
--------------------------------------------------
GET /_license
--------------------------------------------------
// CONSOLE
[source,js]
--------------------------------------------------

@ -39,11 +39,10 @@ For more information, see
The following example checks whether you are eligible to start a trial:
[source,js]
[source,console]
------------------------------------------------------------
GET /_license/trial_status
------------------------------------------------------------
// CONSOLE
Example response:
[source,js]

@ -39,11 +39,10 @@ For more information, see
The following example starts a basic license if you do not currently have a license:
[source,js]
[source,console]
------------------------------------------------------------
POST /_license/start_basic
------------------------------------------------------------
// CONSOLE
// TEST[skip:license testing issues]
Example response:
@ -60,11 +59,10 @@ The following example starts a basic license if you currently have a license wit
features than a basic license. As you are losing features, you must pass the
`acknowledge` parameter:
[source,js]
[source,console]
------------------------------------------------------------
POST /_license/start_basic?acknowledge=true
------------------------------------------------------------
// CONSOLE
// TEST[skip:license testing issues]
Example response:

@ -43,11 +43,10 @@ For more information, see
The following example starts a 30-day trial license. The `acknowledge`
parameter is required, as you are initiating a license that will expire.
[source,js]
[source,console]
------------------------------------------------------------
POST /_license/start_trial?acknowledge=true
------------------------------------------------------------
// CONSOLE
// TEST[skip:license testing issues]
Example response:

@ -55,7 +55,7 @@ install the license. See <<configuring-tls>>.
The following example updates the cluster to a basic license:
[source,js]
[source,console]
------------------------------------------------------------
POST /_license
{
@ -73,7 +73,6 @@ POST /_license
]
}
------------------------------------------------------------
// CONSOLE
// TEST[skip:license testing issues]
NOTE: These values are invalid; you must substitute the appropriate content
@ -132,7 +131,7 @@ receive the following response:
To complete the update, you must re-submit the API request and set the
`acknowledge` parameter to `true`. For example:
[source,js]
[source,console]
------------------------------------------------------------
POST /_license?acknowledge=true
{
@ -150,7 +149,6 @@ POST /_license?acknowledge=true
]
}
------------------------------------------------------------
// CONSOLE
// TEST[skip:license testing issues]
Alternatively:

@ -129,7 +129,7 @@ You can create field mappings when you <<create-mapping,create an index>> and
You can use the <<indices-create-index,create index>> API to create a new index
with an explicit mapping.
[source,js]
[source,console]
----
PUT /my-index
{
@ -142,7 +142,6 @@ PUT /my-index
}
}
----
// CONSOLE
<1> Creates `age`, an <<number,`integer`>> field
<2> Creates `email`, a <<keyword,`keyword`>> field
@ -159,7 +158,7 @@ The following example adds `employee-id`, a `keyword` field with an
<<mapping-index,`index`>> mapping parameter value of `false`. This means values
for the `employee-id` field are stored but not indexed or available for search.
[source,js]
[source,console]
----
PUT /my-index/_mapping
{
@ -171,7 +170,6 @@ PUT /my-index/_mapping
}
}
----
// CONSOLE
// TEST[continued]
[float]
@ -187,11 +185,10 @@ include::{docdir}/indices/put-mapping.asciidoc[tag=put-field-mapping-exceptions]
You can use the <<indices-get-mapping, get mapping>> API to view the mapping of
an existing index.
[source,js]
[source,console]
----
GET /my-index/_mapping
----
// CONSOLE
// TEST[continued]
The API returns the following response:
@ -234,11 +231,10 @@ contains a large number of fields.
The following request retrieves the mapping for the `employee-id` field.
[source,js]
[source,console]
----
GET /my-index/_mapping/field/employee-id
----
// CONSOLE
// TEST[continued]
The API returns the following response:

@ -7,12 +7,12 @@ To index a document, you don't have to first create an index, define a mapping
type, and define your fields -- you can just index a document and the index,
type, and fields will spring to life automatically:
[source,js]
[source,console]
--------------------------------------------------
PUT data/_doc/1 <1>
{ "count": 5 }
--------------------------------------------------
// CONSOLE
<1> Creates the `data` index, the `_doc` mapping type, and a field
called `count` with datatype `long`.
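To see what was generated, you can retrieve the mapping (an illustrative follow-up request):

[source,console]
--------------------------------------------------
GET data/_mapping
--------------------------------------------------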

@ -46,7 +46,7 @@ The default value for `dynamic_date_formats` is:
For example:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index/_doc/1
{
@ -55,7 +55,7 @@ PUT my_index/_doc/1
GET my_index/_mapping <1>
--------------------------------------------------
// CONSOLE
<1> The `create_date` field has been added as a <<date,`date`>>
field with the <<mapping-date-format,`format`>>: +
`"yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z"`.
@ -64,7 +64,7 @@ GET my_index/_mapping <1>
Dynamic date detection can be disabled by setting `date_detection` to `false`:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -78,7 +78,6 @@ PUT my_index/_doc/1 <1>
"create": "2015/09/02"
}
--------------------------------------------------
// CONSOLE
<1> The `create_date` field has been added as a <<text,`text`>> field.
@ -87,7 +86,7 @@ PUT my_index/_doc/1 <1>
Alternatively, the `dynamic_date_formats` can be customised to support your
own <<mapping-date-format,date formats>>:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -101,7 +100,6 @@ PUT my_index/_doc/1
"create_date": "09/25/2015"
}
--------------------------------------------------
// CONSOLE
[[numeric-detection]]
@ -113,7 +111,7 @@ correct solution is to map these fields explicitly, but numeric detection
(which is disabled by default) can be enabled to do this automatically:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -128,7 +126,7 @@ PUT my_index/_doc/1
"my_integer": "1" <2>
}
--------------------------------------------------
// CONSOLE
<1> The `my_float` field is added as a <<number,`float`>> field.
<2> The `my_integer` field is added as a <<number,`long`>> field.

@ -67,7 +67,7 @@ For example, if we wanted to map all integer fields as `integer` instead of
`long`, and all `string` fields as both `text` and `keyword`, we
could use the following template:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -105,7 +105,7 @@ PUT my_index/_doc/1
"my_string": "Some string" <2>
}
--------------------------------------------------
// CONSOLE
<1> The `my_integer` field is mapped as an `integer`.
<2> The `my_string` field is mapped as a `text`, with a `keyword` <<multi-fields,multi field>>.
@ -121,7 +121,7 @@ The following example matches all `string` fields whose name starts with
fields:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -147,7 +147,7 @@ PUT my_index/_doc/1
"long_text": "foo" <2>
}
--------------------------------------------------
// CONSOLE
<1> The `long_num` field is mapped as a `long`.
<2> The `long_text` field uses the default `string` mapping.
@ -175,7 +175,7 @@ final name, e.g. `some_object.*.some_field`.
This example copies the values of any fields in the `name` object to the
top-level `full_name` field, except for the `middle` field:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -204,14 +204,13 @@ PUT my_index/_doc/1
}
}
--------------------------------------------------
// CONSOLE
Note that the `path_match` and `path_unmatch` parameters match on object paths
in addition to leaf fields. As an example, indexing the following document will
result in an error because the `path_match` setting also matches the object
field `name.title`, which can't be mapped as text:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index/_doc/2
{
@ -225,7 +224,6 @@ PUT my_index/_doc/2
}
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
// TEST[catch:bad_request]
@ -237,7 +235,7 @@ with the field name and detected dynamic type. The following example sets all
string fields to use an <<analyzer,`analyzer`>> with the same name as the
field, and disables <<doc-values,`doc_values`>> for all non-string fields:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -272,7 +270,7 @@ PUT my_index/_doc/1
"count": 5 <2>
}
--------------------------------------------------
// CONSOLE
<1> The `english` field is mapped as a `string` field with the `english` analyzer.
<2> The `count` field is mapped as a `long` field with `doc_values` disabled.
@ -289,7 +287,7 @@ interested in full text search, you can make Elasticsearch map your fields
only as `keyword`s. Note that this means that in order to search those fields,
you will have to search on the exact same value that was indexed.
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -307,7 +305,6 @@ PUT my_index
}
}
--------------------------------------------------
// CONSOLE
[[text-only-mappings-strings]]
===== `text`-only mappings for strings
@ -318,7 +315,7 @@ aggregations, sorting or exact search on your string fields, you could tell
Elasticsearch to map it only as a text field (which was the default behaviour
before 5.0):
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -336,7 +333,6 @@ PUT my_index
}
}
--------------------------------------------------
// CONSOLE
===== Disabled norms
@ -344,7 +340,7 @@ Norms are index-time scoring factors. If you do not care about scoring, which
would be the case for instance if you never sort documents by score, you could
disable the storage of these scoring factors in the index and save some space.
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -369,7 +365,6 @@ PUT my_index
}
}
--------------------------------------------------
// CONSOLE
The sub `keyword` field appears in this template to be consistent with the
default rules of dynamic mappings. Of course if you do not need them because
@ -383,7 +378,7 @@ numeric fields that you will often aggregate on but never filter on. In such a
case, you could disable indexing on those fields to save disk space and also
maybe gain some indexing speed:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -411,7 +406,7 @@ PUT my_index
}
}
--------------------------------------------------
// CONSOLE
<1> Like the default dynamic mapping rules, doubles are mapped as floats, which
are usually accurate enough, yet require half the disk space.

@ -20,7 +20,7 @@ which have `doc_values` and `norms` disabled and you do not need to
execute `exists` queries using those fields, you might want to disable
`_field_names` by adding the following to the mappings:
[source,js]
[source,console]
--------------------------------------------------
PUT tweets
{
@ -31,4 +31,3 @@ PUT tweets
}
}
--------------------------------------------------
// CONSOLE

@ -8,7 +8,7 @@ so that documents can be looked up either with the <<docs-get,GET API>> or the
The value of the `_id` field is accessible in certain queries (`term`,
`terms`, `match`, `query_string`, `simple_query_string`).
[source,js]
[source,console]
--------------------------
# Example documents
PUT my_index/_doc/1
@ -30,7 +30,6 @@ GET my_index/_search
}
}
--------------------------
// CONSOLE
<1> Querying on the `_id` field (also see the <<query-dsl-ids-query,`ids` query>>)
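For comparison, the `ids` query mentioned in the callout performs the same lookup; a minimal sketch:

[source,console]
--------------------------
GET my_index/_search
{
  "query": {
    "ids": {
      "values": ["1", "2"]
    }
  }
}
--------------------------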

@ -14,7 +14,7 @@ queries, and is returned as part of the search hits.
For instance, the following query matches all documents that have one or more
fields that were ignored:
[source,js]
[source,console]
--------------------------------------------------
GET _search
{
@ -25,12 +25,11 @@ GET _search
}
}
--------------------------------------------------
// CONSOLE
Similarly, the following query finds all documents whose `@timestamp` field was
ignored at index time:
[source,js]
[source,console]
--------------------------------------------------
GET _search
{
@ -41,5 +40,3 @@ GET _search
}
}
--------------------------------------------------
// CONSOLE

@ -13,7 +13,7 @@ in a `term` or `terms` query (or any query that is rewritten to a `term`
query, such as the `match`, `query_string` or `simple_query_string` query),
but it does not support `prefix`, `wildcard`, `regexp`, or `fuzzy` queries.
[source,js]
[source,console]
--------------------------
# Example documents
PUT index_1/_doc/1
@ -58,7 +58,6 @@ GET index_1,index_2/_search
}
}
--------------------------
// CONSOLE
<1> Querying on the `_index` field
<2> Aggregating on the `_index` field

@ -5,7 +5,7 @@ A mapping type can have custom meta data associated with it. These are not
used at all by Elasticsearch, but can be used to store application-specific
metadata, such as the class that a document belongs to:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -20,14 +20,14 @@ PUT my_index
}
}
--------------------------------------------------
// CONSOLE
<1> This `_meta` info can be retrieved with the
<<indices-get-mapping,GET mapping>> API.
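For instance (an illustrative request; the response would include the `_meta` block defined above):

[source,console]
--------------------------------------------------
GET my_index/_mapping
--------------------------------------------------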
The `_meta` field can be updated on an existing type using the
<<indices-put-mapping,PUT mapping>> API:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index/_mapping
{
@ -40,5 +40,4 @@ PUT my_index/_mapping
}
}
--------------------------------------------------
// CONSOLE
// TEST[continued]

@ -11,7 +11,7 @@ The default value used for `_routing` is the document's <<mapping-id-field,`_id`
Custom routing patterns can be implemented by specifying a custom `routing`
value per document. For instance:
[source,js]
[source,console]
------------------------------
PUT my_index/_doc/1?routing=user1&refresh=true <1>
{
@ -20,7 +20,6 @@ PUT my_index/_doc/1?routing=user1&refresh=true <1>
GET my_index/_doc/1?routing=user1 <2>
------------------------------
// CONSOLE
// TESTSETUP
<1> This document uses `user1` as its routing value, instead of its ID.
@ -30,7 +29,7 @@ GET my_index/_doc/1?routing=user1 <2>
The value of the `_routing` field is accessible in queries:
[source,js]
[source,console]
--------------------------
GET my_index/_search
{
@ -41,7 +40,6 @@ GET my_index/_search
}
}
--------------------------
// CONSOLE
<1> Querying on the `_routing` field (also see the <<query-dsl-ids-query,`ids` query>>)
@ -51,7 +49,7 @@ Custom routing can reduce the impact of searches. Instead of having to fan
out a search request to all the shards in an index, the request can be sent to
just the shard that matches the specific routing value (or values):
[source,js]
[source,console]
------------------------------
GET my_index/_search?routing=user1,user2 <1>
{
@ -62,7 +60,6 @@ GET my_index/_search?routing=user1,user2 <1>
}
}
------------------------------
// CONSOLE
<1> This search request will only be executed on the shards associated with the `user1` and `user2` routing values.
@ -77,7 +74,7 @@ Forgetting the routing value can lead to a document being indexed on more than
one shard. As a safeguard, the `_routing` field can be configured to make a
custom `routing` value required for all CRUD operations:
[source,js]
[source,console]
------------------------------
PUT my_index2
{
@ -93,8 +90,8 @@ PUT my_index2/_doc/1 <2>
"text": "No routing value provided"
}
------------------------------
// CONSOLE
// TEST[catch:bad_request]
<1> Routing is required for `_doc` documents.
<2> This index request throws a `routing_missing_exception`.

@ -12,7 +12,7 @@ _fetch_ requests, like <<docs-get,get>> or <<search-search,search>>.
Though very handy to have around, the source field does incur storage overhead
within the index. For this reason, it can be disabled as follows:
[source,js]
[source,console]
--------------------------------------------------
PUT tweets
{
@ -23,7 +23,6 @@ PUT tweets
}
}
--------------------------------------------------
// CONSOLE
[WARNING]
.Think before disabling the `_source` field
@ -82,7 +81,7 @@ Elasticsearch index to another. Consider using
The `includes`/`excludes` parameters (which also accept wildcards) can be used
as follows:
[source,js]
[source,console]
--------------------------------------------------
PUT logs
{
@ -125,7 +124,6 @@ GET logs/_search
}
}
--------------------------------------------------
// CONSOLE
<1> These fields will be removed from the stored `_source` field.
<2> We can still search on this field, even though it is not in the stored `_source`.

@ -10,7 +10,7 @@ indexed in order to make searching by type name fast.
The value of the `_type` field is accessible in queries, aggregations,
scripts, and when sorting:
[source,js]
[source,console]
--------------------------
# Example documents
@ -52,7 +52,6 @@ GET my_index/_search
}
--------------------------
// CONSOLE
<1> Querying on the `_type` field
<2> Aggregating on the `_type` field

@ -39,7 +39,7 @@ At query time, there are a few more layers:
The easiest way to specify an analyzer for a particular field is to define it
in the field mapping, as follows:
[source,js]
[source,console]
--------------------------------------------------
PUT /my_index
{
@ -70,7 +70,7 @@ GET my_index/_analyze <4>
"text": "The quick Brown Foxes."
}
--------------------------------------------------
// CONSOLE
<1> The `text` field uses the default `standard` analyzer.
<2> The `text.english` <<multi-fields,multi-field>> uses the `english` analyzer, which removes stop words and applies stemming.
<3> This returns the tokens: [ `the`, `quick`, `brown`, `foxes` ].
@ -89,7 +89,7 @@ To disable stop words for phrases a field utilising three analyzer settings will
2. A `search_analyzer` setting for non-phrase queries that will remove stop words
3. A `search_quote_analyzer` setting for phrase queries that will not remove stop words
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -151,7 +151,7 @@ GET my_index/_search
}
}
--------------------------------------------------
// CONSOLE
<1> `my_analyzer` analyzer which tokenizes all terms, including stop words
<2> `my_stop_analyzer` analyzer which removes stop words
<3> `analyzer` setting that points to the `my_analyzer` analyzer which will be used at index time

@ -4,7 +4,7 @@
Individual fields can be _boosted_ automatically -- count more towards the relevance score
-- at query time, with the `boost` parameter as follows:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -21,7 +21,6 @@ PUT my_index
}
}
--------------------------------------------------
// CONSOLE
<1> Matches on the `title` field will have twice the weight as those on the
`content` field, which has the default `boost` of `1.0`.
@ -30,7 +29,7 @@ NOTE: The boost is applied only for term queries (prefix, range and fuzzy querie
You can achieve the same effect by using the boost parameter directly in the query, for instance the following query (with field time boost):
[source,js]
[source,console]
--------------------------------------------------
POST _search
{
@ -43,11 +42,10 @@ POST _search
}
}
--------------------------------------------------
// CONSOLE
is equivalent to:
[source,js]
[source,console]
--------------------------------------------------
POST _search
{
@ -61,7 +59,6 @@ POST _search
}
}
--------------------------------------------------
// CONSOLE
deprecated[5.0.0, "Index time boost is deprecated. Instead, the field mapping boost is applied at query time. For indices created before 5.0.0, the boost will still be applied at index time."]

@ -15,7 +15,7 @@ For instance:
For instance:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -42,8 +42,8 @@ PUT my_index/_doc/2
"number_two": "10" <2>
}
--------------------------------------------------
// CONSOLE
// TEST[catch:bad_request]
<1> The `number_one` field will contain the integer `10`.
<2> This document will be rejected because coercion is disabled.
@ -56,7 +56,7 @@ using the <<indices-put-mapping,PUT mapping API>>.
The `index.mapping.coerce` setting can be set on the index level to disable
coercion globally across all mapping types:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -82,7 +82,7 @@ PUT my_index/_doc/1
PUT my_index/_doc/2
{ "number_two": "10" } <2>
--------------------------------------------------
// CONSOLE
// TEST[catch:bad_request]
<1> The `number_one` field overrides the index level setting to enable coercion.
<2> This document will be rejected because the `number_two` field inherits the index-level coercion setting.

@ -6,7 +6,7 @@ fields into a group field, which can then be queried as a single
field. For instance, the `first_name` and `last_name` fields can be copied to
the `full_name` field as follows:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -46,7 +46,7 @@ GET my_index/_search
}
--------------------------------------------------
// CONSOLE
<1> The values of the `first_name` and `last_name` fields are copied to the
`full_name` field.

@ -21,7 +21,7 @@ All fields which support doc values have them enabled by default. If you are
sure that you don't need to sort or aggregate on a field, or access the field
value from a script, you can disable doc values in order to save disk space:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -38,7 +38,7 @@ PUT my_index
}
}
--------------------------------------------------
// CONSOLE
<1> The `status_code` field has `doc_values` enabled by default.
<2> The `session_id` has `doc_values` disabled, but can still be queried.

@ -5,7 +5,7 @@ By default, fields can be added _dynamically_ to a document, or to
<<object,inner objects>> within a document, just by indexing a document
containing the new field. For instance:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index/_doc/1 <1>
{
@ -31,7 +31,7 @@ PUT my_index/_doc/2 <3>
GET my_index/_mapping <4>
--------------------------------------------------
// CONSOLE
<1> This document introduces the string field `username`, the object field
`name`, and two string fields under the `name` object which can be
referred to as `name.first` and `name.last`.
@ -56,7 +56,7 @@ The `dynamic` setting may be set at the mapping type level, and on each
<<object,inner object>>. Inner objects inherit the setting from their parent
object or from the mapping type. For instance:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -78,7 +78,7 @@ PUT my_index
}
}
--------------------------------------------------
// CONSOLE
<1> Dynamic mapping is disabled at the type level, so no new top-level fields will be added dynamically.
<2> The `user` object inherits the type-level setting.
<3> The `user.social_networks` object enables dynamic mapping, so new fields may be added to this inner object.

@ -34,7 +34,7 @@ interested in search speed, it could be beneficial to set
`eager_global_ordinals: true` on fields that you plan to use in terms
aggregations:
[source,js]
[source,console]
------------
PUT my_index/_mapping
{
@ -46,7 +46,6 @@ PUT my_index/_mapping
}
}
------------
// CONSOLE
// TEST[s/^/PUT my_index\n/]
This will shift the cost of building the global ordinals from search-time to
@ -73,7 +72,7 @@ If you ever decide that you do not need to run `terms` aggregations on this
field anymore, then you can disable eager loading of global ordinals at any
time:
[source,js]
[source,console]
------------
PUT my_index/_mapping
{
@ -85,6 +84,4 @@ PUT my_index/_mapping
}
}
------------
// CONSOLE
// TEST[continued]

@ -13,7 +13,7 @@ parsing of the contents of the field entirely. The JSON can still be retrieved
from the <<mapping-source-field,`_source`>> field, but it is not searchable or
stored in any other way:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -51,7 +51,7 @@ PUT my_index/_doc/session_2
"last_updated": "2015-12-06T18:22:13"
}
--------------------------------------------------
// CONSOLE
<1> The `session_data` field is disabled.
<2> Any arbitrary data can be passed to the `session_data` field as it will be entirely ignored.
<3> The `session_data` will also ignore values that are not JSON objects.
@ -60,7 +60,7 @@ The entire mapping may be disabled as well, in which case the document is
stored in the <<mapping-source-field,`_source`>> field, which means it can be
retrieved, but none of its contents are indexed in any way:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -84,7 +84,7 @@ GET my_index/_doc/session_1 <2>
GET my_index/_mapping <3>
--------------------------------------------------
// CONSOLE
<1> The entire mapping is disabled.
<2> The document can be retrieved.
<3> Checking the mapping reveals that no fields have been added.
@ -94,7 +94,8 @@ definition cannot be updated.
Note that because Elasticsearch completely skips parsing the field
contents, it is possible to add non-object data to a disabled field:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -113,6 +114,5 @@ PUT my_index/_doc/session_1
"session_data": "foo bar" <1>
}
--------------------------------------------------
// CONSOLE
<1> The document is added successfully, even though `session_data` contains non-object data.

@ -54,7 +54,7 @@ Instead, you should have a `text` field for full text searches, and an
unanalyzed <<keyword,`keyword`>> field with <<doc-values,`doc_values`>>
enabled for aggregations, as follows:
[source,js]
[source,console]
---------------------------------
PUT my_index
{
@ -72,7 +72,7 @@ PUT my_index
}
}
---------------------------------
// CONSOLE
<1> Use the `my_field` field for searches.
<2> Use the `my_field.keyword` field for aggregations, sorting, or in scripts.
@ -82,7 +82,7 @@ PUT my_index
You can enable fielddata on an existing `text` field using the
<<indices-put-mapping,PUT mapping API>> as follows:
[source,js]
[source,console]
-----------------------------------
PUT my_index/_mapping
{
@ -94,7 +94,6 @@ PUT my_index/_mapping
}
}
-----------------------------------
// CONSOLE
// TEST[continued]
<1> The mapping that you specify for `my_field` should consist of the existing
@ -116,7 +115,7 @@ value for the field, as opposed to all docs in the segment.
Small segments can be excluded completely by specifying the minimum
number of docs that the segment should contain with `min_segment_size`:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -135,4 +134,3 @@ PUT my_index
}
}
--------------------------------------------------
// CONSOLE

View File

@ -9,7 +9,7 @@ Besides the <<built-in-date-formats,built-in formats>>, your own
<<custom-date-formats,custom formats>> can be specified using the familiar
`yyyy/MM/dd` syntax:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -23,7 +23,6 @@ PUT my_index
}
}
--------------------------------------------------
// CONSOLE
Many APIs which support date values also support <<date-math,date math>>
expressions, such as `now-1M/d` -- the current time, minus one month, rounded
down to the nearest day.
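
For instance, a range query could combine date math with a `date` field like
the one mapped above (a minimal sketch; the bounds are illustrative):

[source,console]
--------------------------------------------------
GET my_index/_search
{
  "query": {
    "range": {
      "date": {
        "gte": "now-1M/d",
        "lte": "now/d"
      }
    }
  }
}
--------------------------------------------------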

View File

@ -6,7 +6,7 @@ For arrays of strings, `ignore_above` will be applied for each array element sep
NOTE: All strings/array elements will still be present in the `_source` field if the latter is enabled, which is the default in Elasticsearch.
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -41,7 +41,7 @@ GET my_index/_search <4>
}
}
--------------------------------------------------
// CONSOLE
<1> This field will ignore any string longer than 20 characters.
<2> This document is indexed successfully.
<3> This document will be indexed, but without indexing the `message` field.
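
As a rough sketch of the effect, a terms aggregation on the field (assuming
the `message` keyword field with `ignore_above: 20` from the example above)
would only return values short enough to have been indexed:

[source,console]
--------------------------------------------------
GET my_index/_search
{
  "size": 0,
  "aggs": {
    "messages": {
      "terms": { "field": "message" }
    }
  }
}
--------------------------------------------------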

View File

@ -12,7 +12,7 @@ indexed, but other fields in the document are processed normally.
For example:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -41,8 +41,8 @@ PUT my_index/_doc/2
"number_two": "foo" <2>
}
--------------------------------------------------
// CONSOLE
// TEST[catch:bad_request]
<1> This document will have the `text` field indexed, but not the `number_one` field.
<2> This document will be rejected because `number_two` does not allow malformed values.
@ -56,7 +56,7 @@ existing fields using the <<indices-put-mapping,PUT mapping API>>.
The `index.mapping.ignore_malformed` setting can be set at the index level to
ignore malformed content globally across all mapping types.
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -76,7 +76,6 @@ PUT my_index
}
}
--------------------------------------------------
// CONSOLE
<1> The `number_one` field inherits the index-level setting.
<2> The `number_two` field overrides the index-level setting to turn off `ignore_malformed`.

View File

@ -33,7 +33,7 @@ NOTE: <<number,Numeric fields>> don't support the `index_options` parameter any
<<mapping-index,Analyzed>> string fields use `positions` as the default, and
all other fields use `docs` as the default.
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -66,5 +66,5 @@ GET my_index/_search
}
}
--------------------------------------------------
// CONSOLE
<1> The `text` field will use the postings for the highlighting by default because `offsets` are indexed.

View File

@ -17,7 +17,7 @@ up prefix searches. It accepts the following optional settings:
This example creates a text field using the default prefix length settings:
[source,js]
[source,console]
--------------------------------
PUT my_index
{
@ -31,14 +31,13 @@ PUT my_index
}
}
--------------------------------
// CONSOLE
<1> An empty settings object will use the default `min_chars` and `max_chars`
settings.
This example uses custom prefix length settings:
[source,js]
[source,console]
--------------------------------
PUT my_index
{
@ -55,4 +54,3 @@ PUT my_index
}
}
--------------------------------
// CONSOLE

View File

@ -6,7 +6,7 @@ purposes. This is the purpose of _multi-fields_. For instance, a `string`
field could be mapped as a `text` field for full-text
search, and as a `keyword` field for sorting or aggregations:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -53,7 +53,7 @@ GET my_index/_search
}
}
--------------------------------------------------
// CONSOLE
<1> The `city.raw` field is a `keyword` version of the `city` field.
<2> The `city` field can be used for full text search.
<3> The `city.raw` field can be used for sorting and aggregations.
@ -71,7 +71,7 @@ ways for better relevance. For instance we could index a field with the
words, and again with the <<english-analyzer,`english` analyzer>>
which stems words into their root form:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -110,7 +110,6 @@ GET my_index/_search
}
}
--------------------------------------------------
// CONSOLE
<1> The `text` field uses the `standard` analyzer.
<2> The `text.english` field uses the `english` analyzer.

View File

@ -10,7 +10,7 @@ search-time when the `keyword` field is searched via a query parser such as
the <<query-dsl-match-query,`match`>> query or via a term-level query
such as the <<query-dsl-term-query,`term`>> query.
[source,js]
[source,console]
--------------------------------
PUT index
{
@ -70,7 +70,6 @@ GET index/_search
}
}
--------------------------------
// CONSOLE
The above queries match documents 1 and 2 since `BÀR` is converted to `bar` at
both index and query time.
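
For reference, a lowercase normalizer such as the one assumed above could be
defined like this (a minimal sketch; the hunk elides the original settings,
and `my_normalizer` is an illustrative name):

[source,console]
--------------------------------
PUT index
{
  "settings": {
    "analysis": {
      "normalizer": {
        "my_normalizer": {
          "type": "custom",
          "filter": [ "lowercase" ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "foo": {
        "type": "keyword",
        "normalizer": "my_normalizer"
      }
    }
  }
}
--------------------------------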
@ -120,7 +119,7 @@ both index and query time.
The fact that keywords are converted prior to indexing also means that
aggregations return normalized values:
[source,js]
[source,console]
----------------------------
GET index/_search
{
@ -134,7 +133,6 @@ GET index/_search
}
}
----------------------------
// CONSOLE
// TEST[continued]
returns

View File

@ -17,7 +17,7 @@ the <<indices-put-mapping,PUT mapping API>>.
Norms can be disabled (but not reenabled after the fact) using the
<<indices-put-mapping,PUT mapping API>> like so:
[source,js]
[source,console]
------------
PUT my_index/_mapping
{
@ -29,7 +29,6 @@ PUT my_index/_mapping
}
}
------------
// CONSOLE
// TEST[s/^/PUT my_index\n/]
NOTE: Norms will not be removed instantly, but will be removed as old segments

View File

@ -8,7 +8,7 @@ field has no values.
The `null_value` parameter allows you to replace explicit `null` values with
the specified value so that it can be indexed and searched. For instance:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -41,7 +41,7 @@ GET my_index/_search
}
}
--------------------------------------------------
// CONSOLE
<1> Replace explicit `null` values with the term `NULL`.
<2> An empty array does not contain an explicit `null`, and so won't be replaced with the `null_value`.
<3> A query for `NULL` returns document 1, but not document 2.

View File

@ -11,7 +11,7 @@ size of this gap is configured using `position_increment_gap` and defaults to
For example:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index/_doc/1
{
@ -41,7 +41,7 @@ GET my_index/_search
}
}
--------------------------------------------------
// CONSOLE
<1> This phrase query doesn't match our document, which is totally expected.
<2> This phrase query matches our document, even though `Abraham` and `Lincoln`
are in separate strings, because `slop` > `position_increment_gap`.
@ -49,7 +49,7 @@ GET my_index/_search
The `position_increment_gap` can be specified in the mapping. For instance:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -77,7 +77,7 @@ GET my_index/_search
}
}
--------------------------------------------------
// CONSOLE
<1> The first term in the next array element will be 0 terms apart from the
last term in the previous array element.
<2> The phrase query matches our document, which is weird, but it's what we asked

View File

@ -13,7 +13,7 @@ be added:
Below is an example of adding `properties` to a mapping type, an `object`
field, and a `nested` field:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -55,7 +55,7 @@ PUT my_index/_doc/1 <4>
]
}
--------------------------------------------------
// CONSOLE
<1> Properties in the top-level mappings definition.
<2> Properties under the `manager` object field.
<3> Properties under the `employees` nested field.
@ -70,7 +70,7 @@ fields using the <<indices-put-mapping,PUT mapping API>>.
Inner fields can be referred to in queries, aggregations, etc., using _dot
notation_:
[source,js]
[source,console]
--------------------------------------------------
GET my_index/_search
{
@ -96,7 +96,6 @@ GET my_index/_search
}
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
IMPORTANT: The full path to the inner field must be specified.
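
For example, a match query on the `manager` object's inner `name` field must
use the full `manager.name` path, not just `name` (a sketch reusing the
mapping above; the query value is illustrative):

[source,console]
--------------------------------------------------
GET my_index/_search
{
  "query": {
    "match": { "manager.name": "Alice White" }
  }
}
--------------------------------------------------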

View File

@ -12,7 +12,7 @@ tokenizer for autocomplete.
By default, queries will use the `analyzer` defined in the field mapping, but
this can be overridden with the `search_analyzer` setting:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -66,7 +66,6 @@ GET my_index/_search
}
--------------------------------------------------
// CONSOLE
<1> Analysis settings to define the custom `autocomplete` analyzer.
<2> The `text` field uses the `autocomplete` analyzer at index time, but the `standard` analyzer at search time.

View File

@ -34,7 +34,7 @@ configuration are:
The `similarity` can be set on the field level when a field is first created,
as follows:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -51,6 +51,6 @@ PUT my_index
}
}
--------------------------------------------------
// CONSOLE
<1> The `default_field` uses the `BM25` similarity.
<2> The `boolean_sim_field` uses the `boolean` similarity.

View File

@ -16,7 +16,7 @@ you have a document with a `title`, a `date`, and a very large `content`
field, you may want to retrieve just the `title` and the `date` without having
to extract those fields from a large `_source` field:
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -49,7 +49,7 @@ GET my_index/_search
"stored_fields": [ "title", "date" ] <2>
}
--------------------------------------------------
// CONSOLE
<1> The `title` and `date` fields are stored.
<2> This request will retrieve the values of the `title` and `date` fields.

View File

@ -31,7 +31,7 @@ The fast vector highlighter requires `with_positions_offsets`.
WARNING: Setting `with_positions_offsets` will double the size of a field's
index.
[source,js]
[source,console]
--------------------------------------------------
PUT my_index
{
@ -64,7 +64,7 @@ GET my_index/_search
}
}
--------------------------------------------------
// CONSOLE
<1> The fast vector highlighter will be used by default for the `text` field
because term vectors are enabled.

View File

@ -454,7 +454,7 @@ warnings in 6.8, the parameter can be set to either `true` or `false`. In 7.0, s
See some examples of interactions with Elasticsearch with this option set to `false`:
[source,js]
[source,console]
--------------------------------------------------
PUT index?include_type_name=false
{
@ -467,10 +467,10 @@ PUT index?include_type_name=false
}
}
--------------------------------------------------
// CONSOLE
<1> Mappings are included directly under the `mappings` key, without a type name.
[source,js]
[source,console]
--------------------------------------------------
PUT index/_mappings?include_type_name=false
{
@ -481,15 +481,14 @@ PUT index/_mappings?include_type_name=false
}
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
<1> Mappings are included directly under the `mappings` key, without a type name.
[source,js]
[source,console]
--------------------------------------------------
GET index/_mappings?include_type_name=false
--------------------------------------------------
// CONSOLE
// TEST[continued]
The above call returns
@ -520,14 +519,13 @@ The above call returns
In 7.0, index APIs must be called with the `{index}/_doc` path for automatic
generation of the `_id` and `{index}/_doc/{id}` with explicit ids.
[source,js]
[source,console]
--------------------------------------------------
PUT index/_doc/1
{
"foo": "baz"
}
--------------------------------------------------
// CONSOLE
[source,console-result]
--------------------------------------------------
@ -549,11 +547,10 @@ PUT index/_doc/1
Similarly, the `get` and `delete` APIs use the path `{index}/_doc/{id}`:
[source,js]
[source,console]
--------------------------------------------------
GET index/_doc/1
--------------------------------------------------
// CONSOLE
// TEST[continued]
NOTE: In 7.0, `_doc` represents the endpoint name instead of the document type.
@ -563,7 +560,7 @@ The `_doc` component is a permanent part of the path for the document `index`,
For API paths that contain both a type and endpoint name like `_update`,
in 7.0 the endpoint will immediately follow the index name:
[source,js]
[source,console]
--------------------------------------------------
POST index/_update/1
{
@ -574,14 +571,13 @@ POST index/_update/1
GET /index/_source/1
--------------------------------------------------
// CONSOLE
// TEST[continued]
Types should also no longer appear in the body of requests. The following
example of bulk indexing omits the type both in the URL, and in the individual
bulk commands:
[source,js]
[source,console]
--------------------------------------------------
POST _bulk
{ "index" : { "_index" : "index", "_id" : "3" } }
@ -589,7 +585,6 @@ POST _bulk
{ "index" : { "_index" : "index", "_id" : "4" } }
{ "foo" : "qux" }
--------------------------------------------------
// CONSOLE
[float]
==== Search APIs
@ -612,7 +607,7 @@ in the response. For example, the following typeless `get` call will always
return `_doc` as the type, even if the mapping has a custom type name like
`my_type`:
[source,js]
[source,console]
--------------------------------------------------
PUT index/my_type/1
{
@ -621,7 +616,6 @@ PUT index/my_type/1
GET index/_doc/1
--------------------------------------------------
// CONSOLE
[source,console-result]
--------------------------------------------------
@ -655,7 +649,7 @@ will be typeless in spite of the fact that it matches a template that defines
a type. Both `index-1-01` and `index-2-01` will inherit the `foo` field from
the template that they match.
[source,js]
[source,console]
--------------------------------------------------
PUT _template/template1
{
@ -707,7 +701,6 @@ PUT index-2-01
}
}
--------------------------------------------------
// CONSOLE
In the case of implicit index creation, when documents are indexed into an
index that doesn't exist yet, the template is always honored. This is

Some files were not shown because too many files have changed in this diff Show More