[DOCS] Remove heading offsets for REST APIs (#44568)
Several files in the REST APIs nav section are included using `:leveloffset:` tags. This commit increments the headings (h2 -> h3, h3 -> h4, etc.) in those files and removes the `:leveloffset:` tags. Other supporting changes:

* Alphabetizes the top-level REST API nav items.
* Changes the 'indices APIs' heading to 'index APIs'.
* Changes the 'Snapshot lifecycle management' heading to sentence case.
This commit is contained in:
parent: f028ab43ad
commit: a63f60b776
@@ -1,5 +1,5 @@
 [[api-conventions]]
-= API conventions
+== API conventions

 The *Elasticsearch* REST APIs are exposed using <<modules-http,JSON over HTTP>>.
@@ -12,7 +12,7 @@ API, unless otherwise specified.
 * <<url-access-control>>

 [[multi-index]]
-== Multiple Indices
+=== Multiple Indices

 Most APIs that refer to an `index` parameter support execution across multiple indices,
 using simple `test1,test2,test3` notation (or `_all` for all indices). It also
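As a sketch of this notation (the `test1`..`test3` index names and the `tag` field below are placeholders, not part of the change):

[source,js]
--------------------------------------------------
# search three specific indices, then all indices (names are illustrative)
GET /test1,test2,test3/_search?q=tag:wow
GET /_all/_search?q=tag:wow
--------------------------------------------------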
@@ -55,7 +55,7 @@ NOTE: Single index APIs such as the <<docs>> and the
 <<indices-aliases,single-index `alias` APIs>> do not support multiple indices.

 [[date-math-index-names]]
-== Date math support in index names
+=== Date math support in index names

 Date math index name resolution enables you to search a range of time-series indices, rather
 than searching all of your time-series indices and filtering the results or maintaining aliases.
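For instance, a date math name such as `<logstash-{now/d}>` must be URL-encoded, roughly as in this sketch (the `logstash-` prefix is illustrative):

[source,js]
--------------------------------------------------
# <logstash-{now/d}> resolves to today's daily index
GET /%3Clogstash-%7Bnow%2Fd%7D%3E/_search
--------------------------------------------------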
@@ -164,12 +164,12 @@ GET /%3Clogstash-%7Bnow%2Fd-2d%7D%3E%2C%3Clogstash-%7Bnow%2Fd-1d%7D%3E%2C%3Clogs
 // TEST[s/now/2016.09.20||/]

 [[common-options]]
-== Common options
+=== Common options

 The following options can be applied to all of the REST APIs.

 [float]
-=== Pretty Results
+==== Pretty Results

 When appending `?pretty=true` to any request made, the JSON returned
 will be pretty formatted (use it for debugging only!). Another option is
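A minimal sketch of the option:

[source,js]
--------------------------------------------------
# pretty-printed JSON, for debugging only
GET /_cluster/health?pretty=true
--------------------------------------------------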
@@ -178,7 +178,7 @@ to set `?format=yaml` which will cause the result to be returned in the


 [float]
-=== Human readable output
+==== Human readable output

 Statistics are returned in a format suitable for humans
 (e.g. `"exists_time": "1h"` or `"size": "1kb"`) and for computers
@@ -191,7 +191,7 @@ consumption. The default for the `human` flag is

 [[date-math]]
 [float]
-=== Date Math
+==== Date Math

 Most parameters which accept a formatted date value -- such as `gt` and `lt`
 in <<query-dsl-range-query,`range` queries>>, or `from` and `to`
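A range query using date math might look like the following sketch (the `timestamp` field is hypothetical):

[source,js]
--------------------------------------------------
GET /_search
{
  "query": {
    "range": {
      "timestamp": {
        "gte": "now-1d/d",
        "lt": "now/d"
      }
    }
  }
}
--------------------------------------------------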
@@ -229,7 +229,7 @@ Assuming `now` is `2001-01-01 12:00:00`, some examples are:

 [float]
 [[common-options-response-filtering]]
-=== Response Filtering
+==== Response Filtering

 All REST APIs accept a `filter_path` parameter that can be used to reduce
 the response returned by Elasticsearch. This parameter takes a comma
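For example, along the lines of the snippets later in this file:

[source,js]
--------------------------------------------------
# keep only the total time and the id/score of each hit
GET /_search?q=elasticsearch&filter_path=took,hits.hits._id,hits.hits._score
--------------------------------------------------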
@@ -396,7 +396,7 @@ GET /_search?filter_path=hits.hits._source&_source=title&sort=rating:desc


 [float]
-=== Flat Settings
+==== Flat Settings

 The `flat_settings` flag affects rendering of the lists of settings. When the
 `flat_settings` flag is `true`, settings are returned in a flat format:
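A sketch (the `twitter` index is the placeholder used throughout these docs):

[source,js]
--------------------------------------------------
# settings come back as flat dotted keys, e.g. "index.number_of_replicas"
GET /twitter/_settings?flat_settings=true
--------------------------------------------------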
@@ -467,27 +467,27 @@ Returns:
 By default `flat_settings` is set to `false`.

 [float]
-=== Parameters
+==== Parameters

 Rest parameters (when using HTTP, map to HTTP URL parameters) follow the
 convention of using underscore casing.

 [float]
-=== Boolean Values
+==== Boolean Values

 All REST API parameters (both request parameters and JSON body) support
 providing boolean "false" as the value `false` and boolean "true" as the
 value `true`. All other values will raise an error.

 [float]
-=== Number Values
+==== Number Values

 All REST APIs support providing numbered parameters as `string` on top
 of supporting the native JSON number types.

 [[time-units]]
 [float]
-=== Time units
+==== Time units

 Whenever durations need to be specified, e.g. for a `timeout` parameter, the duration must specify
 the unit, like `2d` for 2 days. The supported units are:
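A sketch of a duration parameter carrying its unit:

[source,js]
--------------------------------------------------
# 30s = thirty seconds; a bare number would be rejected here
GET /_cluster/health?timeout=30s
--------------------------------------------------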
@@ -503,7 +503,7 @@ the unit, like `2d` for 2 days. The supported units are:

 [[byte-units]]
 [float]
-=== Byte size units
+==== Byte size units

 Whenever the byte size of data needs to be specified, e.g. when setting a buffer size
 parameter, the value must specify the unit, like `10kb` for 10 kilobytes. Note that
@@ -519,7 +519,7 @@ these units use powers of 1024, so `1kb` means 1024 bytes. The supported units a

 [[size-units]]
 [float]
-=== Unit-less quantities
+==== Unit-less quantities

 Unit-less quantities means that they don't have a "unit" like "bytes" or "Hertz" or "meter" or "long tonne".

@@ -535,7 +535,7 @@ when we mean 87 though. These are the supported multipliers:

 [[distance-units]]
 [float]
-=== Distance Units
+==== Distance Units

 Wherever distances need to be specified, such as the `distance` parameter in
 the <<query-dsl-geo-distance-query>>), the default unit is meters if none is specified.
@@ -557,7 +557,7 @@ Nautical mile:: `NM`, `nmi`, or `nauticalmiles`

 [[fuzziness]]
 [float]
-=== Fuzziness
+==== Fuzziness

 Some queries and APIs support parameters to allow inexact _fuzzy_ matching,
 using the `fuzziness` parameter.
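A sketch of a fuzzy match (the `message` field is hypothetical):

[source,js]
--------------------------------------------------
GET /_search
{
  "query": {
    "match": {
      "message": {
        "query": "quikc brown fox",
        "fuzziness": "AUTO"
      }
    }
  }
}
--------------------------------------------------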
@@ -590,7 +590,7 @@ the default values are 3 and 6, equivalent to `AUTO:3,6` that make for lengths:

 [float]
 [[common-options-error-options]]
-=== Enabling stack traces
+==== Enabling stack traces

 By default when a request returns an error Elasticsearch doesn't include the
 stack trace of the error. You can enable that behavior by setting the
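A sketch of a deliberately failing request with traces enabled (the invalid `size` value triggers the error):

[source,js]
--------------------------------------------------
POST /twitter/_search?size=surprise_me&error_trace=true
--------------------------------------------------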
@@ -668,7 +668,7 @@ The response looks like:
 // TESTRESPONSE[s/"stack_trace": "java.lang.Number.+\.\.\."/"stack_trace": $body.error.caused_by.stack_trace/]

 [float]
-=== Request body in query string
+==== Request body in query string

 For libraries that don't accept a request body for non-POST requests,
 you can pass the request body as the `source` query string parameter
@@ -677,7 +677,7 @@ should also be passed with a media type value that indicates the format
 of the source, such as `application/json`.

 [float]
-=== Content-Type Requirements
+==== Content-Type Requirements

 The type of the content sent in a request body must be specified using
 the `Content-Type` header. The value of this header must map to one of
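A curl sketch (assumes a local node on the default port 9200):

[source,sh]
--------------------------------------------------
# the Content-Type header declares the format of the request body
curl -H "Content-Type: application/json" -XGET "localhost:9200/_search?pretty" -d'
{
  "query": { "match_all": {} }
}'
--------------------------------------------------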
@@ -690,7 +690,7 @@ content type must be specified using the `source_content_type` query
 string parameter.

 [[url-access-control]]
-== URL-based access control
+=== URL-based access control

 Many users use a proxy with URL-based access control to secure access to
 Elasticsearch indices. For <<search-multi-search,multi-search>>,
@@ -1,8 +1,8 @@
 [[cat]]
-= cat APIs
+== cat APIs

 ["float",id="intro"]
-== Introduction
+=== Introduction

 JSON is great... for computers. Even if it's pretty-printed, trying
 to find relationships in the data is tedious. Human eyes, especially
@@ -15,11 +15,11 @@ the available commands.

 [float]
 [[common-parameters]]
-== Common parameters
+=== Common parameters

 [float]
 [[verbose]]
-=== Verbose
+==== Verbose

 Each of the commands accepts a query string parameter `v` to turn on
 verbose output. For example:
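A minimal sketch:

[source,js]
--------------------------------------------------
# v adds a header row to the tabular output
GET /_cat/master?v
--------------------------------------------------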
@@ -41,7 +41,7 @@ u_n93zwxThWHi1PDBJAGAg 127.0.0.1 127.0.0.1 u_n93zw

 [float]
 [[help]]
-=== Help
+==== Help

 Each of the commands accepts a query string parameter `help` which will
 output its available columns. For example:
@@ -70,7 +70,7 @@ instead.

 [float]
 [[headers]]
-=== Headers
+==== Headers

 Each of the commands accepts a query string parameter `h` which forces
 only those columns to appear. For example:
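A sketch of column selection:

[source,js]
--------------------------------------------------
# show only the requested columns, in the requested order
GET /_cat/nodes?h=ip,port,heapPercent,name
--------------------------------------------------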
@@ -95,7 +95,7 @@ with `queue`.

 [float]
 [[numeric-formats]]
-=== Numeric formats
+==== Numeric formats

 Many commands provide a few types of numeric output, either a byte, size
 or a time value. By default, these types are human-formatted,
@@ -126,7 +126,7 @@ If you want to change the <<size-units,size units>>, use `size` parameter.
 If you want to change the <<byte-units,byte units>>, use `bytes` parameter.

 [float]
-=== Response as text, json, smile, yaml or cbor
+==== Response as text, json, smile, yaml or cbor

 [source,sh]
 --------------------------------------------------
@@ -179,7 +179,7 @@ For example:

 [float]
 [[sort]]
-=== Sort
+==== Sort

 Each of the commands accepts a query string parameter `s` which sorts the table by
 the columns specified as the parameter value. Columns are specified either by name or by
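A sketch of sorting:

[source,js]
--------------------------------------------------
# sort templates by descending order, then by index pattern
GET /_cat/templates?v&s=order:desc,index_patterns
--------------------------------------------------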
@@ -1,5 +1,5 @@
 [[cat-alias]]
-== cat aliases
+=== cat aliases

 `aliases` shows information about currently configured aliases to indices
 including filter and routing infos.
@@ -1,5 +1,5 @@
 [[cat-allocation]]
-== cat allocation
+=== cat allocation

 `allocation` provides a snapshot of how many shards are allocated to each data node
 and how much disk space they are using.
@@ -1,5 +1,5 @@
 [[cat-count]]
-== cat count
+=== cat count

 `count` provides quick access to the document count of the entire
 cluster, or individual indices.
@@ -1,5 +1,5 @@
 [[cat-fielddata]]
-== cat fielddata
+=== cat fielddata

 `fielddata` shows how much heap memory is currently being used by fielddata
 on every data node in the cluster.
@@ -1,5 +1,5 @@
 [[cat-health]]
-== cat health
+=== cat health

 `health` is a terse, one-line representation of the same information
 from `/_cluster/health`.
@@ -75,7 +75,7 @@ static, we would have an idea that there is a problem.

 [float]
 [[timestamp]]
-=== Why the timestamp?
+==== Why the timestamp?

 You typically are using the `health` command when a cluster is
 malfunctioning. During this period, it's extremely important to
@@ -1,5 +1,5 @@
 [[cat-indices]]
-== cat indices
+=== cat indices

 The `indices` command provides a cross-section of each index. This
 information *spans nodes*. For example:
@@ -37,7 +37,7 @@ is to use either the <<cat-count>> or the <<search-count>>

 [float]
 [[pri-flag]]
-=== Primaries
+==== Primaries

 The index stats by default will show them for all of an index's
 shards, including replicas. A `pri` flag can be supplied to enable
@@ -45,7 +45,7 @@ the view of relevant stats in the context of only the primaries.

 [float]
 [[examples]]
-=== Examples
+==== Examples

 Which indices are yellow?

@@ -1,5 +1,5 @@
 [[cat-master]]
-== cat master
+=== cat master

 `master` doesn't have any extra options. It simply displays the
 master's node ID, bound IP address, and node name. For example:
@@ -1,5 +1,5 @@
 [[cat-nodeattrs]]
-== cat nodeattrs
+=== cat nodeattrs

 The `nodeattrs` command shows custom node attributes.
 For example:
@@ -32,7 +32,7 @@ and the `attr` and `value` columns give you the custom node attributes,
 one per line.

 [float]
-=== Columns
+==== Columns

 Below is an exhaustive list of the existing headers that can be
 passed to `nodeattrs?h=` to retrieve the relevant details in ordered
@@ -1,5 +1,5 @@
 [[cat-nodes]]
-== cat nodes
+=== cat nodes

 The `nodes` command shows the cluster topology. For example

@@ -33,7 +33,7 @@ requested with `id` or `nodeId`) in its full length or in abbreviated form (the
 default).

 [float]
-=== Columns
+==== Columns

 Below is an exhaustive list of the existing headers that can be
 passed to `nodes?h=` to retrieve the relevant details in ordered
@@ -1,5 +1,5 @@
 [[cat-pending-tasks]]
-== cat pending tasks
+=== cat pending tasks

 `pending_tasks` provides the same information as the
 <<cluster-pending,`/_cluster/pending_tasks`>> API in a
@@ -1,5 +1,5 @@
 [[cat-plugins]]
-== cat plugins
+=== cat plugins

 The `plugins` command provides a view per node of running plugins. This information *spans nodes*.

@@ -1,5 +1,5 @@
 [[cat-recovery]]
-== cat recovery
+=== cat recovery

 The `recovery` command is a view of index shard recoveries, both on-going and previously
 completed. It is a more compact view of the JSON <<indices-recovery,recovery>> API.
@@ -1,5 +1,5 @@
 [[cat-repositories]]
-== cat repositories
+=== cat repositories

 The `repositories` command shows the snapshot repositories registered in the
 cluster. For example:
@@ -1,5 +1,5 @@
 [[cat-segments]]
-== cat segments
+=== cat segments

 The `segments` command provides low level information about the segments
 in the shards of an index. It provides information similar to the
@@ -1,5 +1,5 @@
 [[cat-shards]]
-== cat shards
+=== cat shards

 The `shards` command is the detailed view of what nodes contain which
 shards. It will tell you if it's a primary or replica, the number of
@@ -27,7 +27,7 @@ twitter 0 p STARTED 3014 31.1mb 192.168.56.10 H5dfFeA

 [float]
 [[index-pattern]]
-=== Index pattern
+==== Index pattern

 If you have many shards, you may wish to limit which indices show up
 in the output. You can always do this with `grep`, but you can save
@@ -54,7 +54,7 @@ twitter 0 p STARTED 3014 31.1mb 192.168.56.10 H5dfFeA

 [float]
 [[relocation]]
-=== Relocation
+==== Relocation

 Let's say you've checked your health and you see relocating
 shards. Where are they from and where are they going?
@@ -76,7 +76,7 @@ twitter 0 p RELOCATING 3014 31.1mb 192.168.56.10 H5dfFeA -> -> 192.168.56.30 bGG

 [float]
 [[states]]
-=== Shard states
+==== Shard states

 Before a shard can be used, it goes through an `INITIALIZING` state.
 `shards` can show you which ones.
@@ -123,7 +123,7 @@ twitter 0 r UNASSIGNED ALLOCATION_FAILED

 [float]
 [[reason-unassigned]]
-=== Reasons for unassigned shard
+==== Reasons for unassigned shard

 These are the possible reasons for a shard to be in a unassigned state:

@@ -1,5 +1,5 @@
 [[cat-snapshots]]
-== cat snapshots
+=== cat snapshots

 The `snapshots` command shows all snapshots that belong to a specific repository.
 To find a list of available repositories to query, the command `/_cat/repositories` can be used.
@@ -1,5 +1,5 @@
 [[cat-templates]]
-== cat templates
+=== cat templates

 The `templates` command provides information about existing templates.

@@ -1,5 +1,5 @@
 [[cat-thread-pool]]
-== cat thread pool
+=== cat thread pool

 The `thread_pool` command shows cluster wide thread pool statistics per node. By default the active, queue and rejected
 statistics are returned for all thread pools.
@@ -113,7 +113,7 @@ Here the host columns and the active, rejected and completed suggest thread pool

 All <<modules-threadpool,built-in thread pools>> and custom thread pools are available.
 [float]
-==== Thread Pool Fields
+===== Thread Pool Fields

 For each thread pool, you can load details about it by using the field names
 in the table below.
@@ -136,7 +136,7 @@ in the table below.
 |=======================================================================

 [float]
-=== Other Fields
+==== Other Fields

 In addition to details about each thread pool, it is also convenient to get an
 understanding of where those thread pools reside. As such, you can request
@@ -1,8 +1,8 @@
 [[cluster]]
-= Cluster APIs
+== Cluster APIs

 ["float",id="cluster-nodes"]
-== Node specification
+=== Node specification

 Some cluster-level APIs may operate on a subset of the nodes which can be
 specified with _node filters_. For example, the <<tasks,Task Management>>,
@@ -1,5 +1,5 @@
 [[cluster-allocation-explain]]
-== Cluster Allocation Explain API
+=== Cluster Allocation Explain API

 The purpose of the cluster allocation explain API is to provide
 explanations for shard allocations in the cluster. For unassigned shards,
@@ -11,7 +11,7 @@ a shard is unassigned or why a shard continues to remain on its current node
 when you might expect otherwise.

 [float]
-=== Explain API Request
+==== Explain API Request

 To explain the allocation of a shard, first an index should exist:

@@ -68,7 +68,7 @@ GET /_cluster/allocation/explain
 // CONSOLE

 [float]
-=== Explain API Response
+==== Explain API Response

 This section includes examples of the cluster allocation explain API response output
 under various scenarios.
@@ -1,5 +1,5 @@
 [[cluster-get-settings]]
-== Cluster Get Settings
+=== Cluster Get Settings

 The cluster get settings API allows to retrieve the cluster wide settings.

@@ -1,5 +1,5 @@
 [[cluster-health]]
-== Cluster Health
+=== Cluster Health

 The cluster health API allows to get a very simple status on the health
 of the cluster. For example, on a quiet single node cluster with a single index
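The simplest invocation is a sketch like:

[source,js]
--------------------------------------------------
GET /_cluster/health
--------------------------------------------------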
@@ -70,7 +70,7 @@ GET /_cluster/health?wait_for_status=yellow&timeout=50s

 [float]
 [[request-params]]
-=== Request Parameters
+==== Request Parameters

 The cluster health API accepts the following request parameters:

@@ -1,5 +1,5 @@
 [[cluster-nodes-hot-threads]]
-== Nodes hot_threads
+=== Nodes hot_threads

 This API yields a breakdown of the hot threads on each selected node in the
 cluster. Its endpoints are `/_nodes/hot_threads` and
@@ -1,5 +1,5 @@
 [[cluster-nodes-info]]
-== Nodes Info
+=== Nodes Info

 The cluster nodes info API allows to retrieve one or more (or all) of
 the cluster nodes information.
@@ -1,8 +1,8 @@
 [[cluster-nodes-stats]]
-== Nodes Stats
+=== Nodes Stats

 [float]
-=== Nodes statistics
+==== Nodes statistics

 The cluster nodes stats API allows to retrieve one or more (or all) of
 the cluster nodes statistics.
@@ -85,7 +85,7 @@ All stats can be explicitly requested via `/_nodes/stats/_all` or `/_nodes/stats

 [float]
 [[fs-info]]
-==== FS information
+===== FS information

 The `fs` flag can be set to retrieve
 information that concern the file system:
@@ -176,7 +176,7 @@ information that concern the file system:

 [float]
 [[os-stats]]
-==== Operating System statistics
+===== Operating System statistics

 The `os` flag can be set to retrieve statistics that concern
 the operating system:
@@ -280,7 +280,7 @@ and `/sys/fs/cgroup/cpuacct`.

 [float]
 [[process-stats]]
-==== Process statistics
+===== Process statistics

 The `process` flag can be set to retrieve statistics that concern
 the current running process:
@@ -305,7 +305,7 @@ the current running process:

 [float]
 [[node-indices-stats]]
-=== Indices statistics
+==== Indices statistics

 You can get information about indices stats on `node`, `indices`, or `shards` level.

@@ -346,7 +346,7 @@ Supported metrics are:

 [float]
 [[search-groups]]
-=== Search groups
+==== Search groups

 You can get statistics about search groups for searches executed
 on this node.
@@ -363,7 +363,7 @@ GET /_nodes/stats/indices?groups=foo,bar

 [float]
 [[ingest-stats]]
-=== Ingest statistics
+==== Ingest statistics

 The `ingest` flag can be set to retrieve statistics that concern ingest:

@@ -383,7 +383,7 @@ On top of these overall ingest statistics, these statistics are also provided on

 [float]
 [[adaptive-selection-stats]]
-=== Adaptive selection statistics
+==== Adaptive selection statistics

 The `adaptive_selection` flag can be set to retrieve statistics that concern
 <<search-adaptive-replica,adaptive replica selection>>. These statistics are
@@ -1,8 +1,8 @@
 [[cluster-nodes-usage]]
-== Nodes Feature Usage
+=== Nodes Feature Usage

 [float]
-=== Nodes usage
+==== Nodes usage

 The cluster nodes usage API allows to retrieve information on the usage
 of features for each node.
@@ -23,7 +23,7 @@ second command selectively retrieves nodes usage of only `nodeId1` and

 [float]
 [[rest-usage]]
-==== REST actions usage information
+===== REST actions usage information

 The `rest_actions` field in the response contains a map of the REST
 actions classname with a count of the number of times that action has
@@ -1,5 +1,5 @@
 [[cluster-pending]]
-== Pending cluster tasks
+=== Pending cluster tasks

 The pending cluster tasks API returns a list of any cluster-level changes
 (e.g. create index, update mapping, allocate or fail shard) which have not yet
@@ -1,5 +1,5 @@
 [[cluster-remote-info]]
-== Remote Cluster Info
+=== Remote Cluster Info

 The cluster remote info API allows to retrieve all of the configured
 remote cluster information.
@@ -1,5 +1,5 @@
 [[cluster-reroute]]
-== Cluster Reroute
+=== Cluster Reroute

 The reroute command allows for manual changes to the allocation of individual
 shards in the cluster. For example, a shard can be moved from one node to
@@ -77,7 +77,7 @@ The commands supported are:
 <<modules-cluster,allocation deciders>> into account.

 [float]
-=== Retrying failed allocations
+==== Retrying failed allocations

 The cluster will attempt to allocate a shard a maximum of
 `index.allocation.max_retries` times in a row (defaults to `5`), before giving
@@ -90,7 +90,7 @@ calling the <<cluster-reroute,`reroute`>> API with the `?retry_failed` URI
 query parameter, which will attempt a single retry round for these shards.

 [float]
-=== Forced allocation on unrecoverable errors
+==== Forced allocation on unrecoverable errors

 Two more commands are available that allow the allocation of a primary shard to
 a node. These commands should however be used with extreme care, as primary
@@ -1,5 +1,5 @@
 [[cluster-state]]
-== Cluster State
+=== Cluster State

 The cluster state API allows access to metadata representing the state of the
 whole cluster. This includes information such as
@@ -39,7 +39,7 @@ retrieve the cluster state local to a particular node by adding `local=true` to
 the query string.

 [float]
-=== Response Filters
+==== Response Filters

 The cluster state contains information about all the indices in the cluster,
 including their mappings, as well as templates and other metadata. This means it
@@ -1,5 +1,5 @@
 [[cluster-stats]]
-== Cluster Stats
+=== Cluster Stats

 The Cluster Stats API allows to retrieve statistics from a cluster wide perspective.
 The API returns basic index metrics (shard numbers, store size, memory usage) and
@@ -1,10 +1,10 @@
 [[tasks]]
-== Task Management API
+=== Task Management API

 beta[The Task Management API is new and should still be considered a beta feature. The API may change in ways that are not backwards compatible]

 [float]
-=== Current Tasks Information
+==== Current Tasks Information

 The task management API allows to retrieve information about the tasks currently
 executing on one or more nodes in the cluster.
@@ -177,7 +177,7 @@ GET _cat/tasks?detailed

 [float]
 [[task-cancellation]]
-=== Task Cancellation
+==== Task Cancellation

 If a long-running task supports cancellation, it can be cancelled with the cancel
 tasks API. The following example cancels task `oTUltX4IQMOUUVeiohTt8A:12345`:
@@ -199,7 +199,7 @@ POST _tasks/_cancel?nodes=nodeId1,nodeId2&actions=*reindex
 // CONSOLE

 [float]
-=== Task Grouping
+==== Task Grouping

 The task lists returned by task API commands can be grouped either by nodes (default) or by parent tasks using the `group_by` parameter.
 The following command will change the grouping to parent tasks:
@@ -219,7 +219,7 @@ GET _tasks?group_by=none
 // CONSOLE

 [float]
-=== Identifying running tasks
+==== Identifying running tasks

 The `X-Opaque-Id` header, when provided on the HTTP request header, is going to be returned as a header in the response as well as
 in the `headers` field for in the task information. This allows to track certain calls, or associate certain tasks with
@@ -1,5 +1,5 @@
 [[cluster-update-settings]]
-== Cluster Update Settings
+=== Cluster Update Settings

 Use this API to review and change cluster-wide settings.

@@ -102,7 +102,7 @@ PUT /_cluster/settings


 [float]
-=== Order of Precedence
+==== Order of Precedence

 The order of precedence for cluster settings is:

@@ -1,5 +1,5 @@
 [[voting-config-exclusions]]
-== Voting configuration exclusions API
+=== Voting configuration exclusions API
 ++++
 <titleabbrev>Voting Configuration Exclusions</titleabbrev>
 ++++
@@ -8,20 +8,20 @@ Adds or removes master-eligible nodes from the
 <<modules-discovery-voting,voting configuration exclusion list>>.

 [float]
-=== Request
+==== Request

 `POST _cluster/voting_config_exclusions/<node_name>` +

 `DELETE _cluster/voting_config_exclusions`

 [float]
-=== Path parameters
+==== Path parameters

 `node_name`::
 A <<cluster-nodes,node filter>> that identifies {es} nodes.

 [float]
-=== Description
+==== Description

 By default, if there are more than three master-eligible nodes in the cluster
 and you remove fewer than half of the master-eligible nodes in the cluster at
@@ -58,7 +58,7 @@ maintain the voting configuration.
 For more information, see <<modules-discovery-removing-nodes>>.

 [float]
-=== Examples
+==== Examples

 Add `nodeId1` to the voting configuration exclusions list:
 [source,js]
@@ -1,5 +1,5 @@
 [[docs]]
-= Document APIs
+== Document APIs

 This section starts with a short introduction to Elasticsearch's <<docs-replication,data replication model>>, followed by a
 detailed description of the following CRUD APIs:
@@ -1,5 +1,5 @@
 [[docs-bulk]]
-== Bulk API
+=== Bulk API

 The bulk API makes it possible to perform many index/delete operations
 in a single API call. This can greatly increase the indexing speed.
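A minimal sketch of the newline-delimited format (the `test` index and `field1` are placeholders):

[source,js]
--------------------------------------------------
POST /_bulk
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
{ "delete" : { "_index" : "test", "_id" : "2" } }
--------------------------------------------------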
@@ -198,7 +198,7 @@ chunks, as this will slow things down.

 [float]
 [[bulk-optimistic-concurrency-control]]
-=== Optimistic Concurrency Control
+==== Optimistic Concurrency Control

 Each `index` and `delete` action within a bulk API call may include the
 `if_seq_no` and `if_primary_term` parameters in their respective action
@@ -209,7 +209,7 @@ documents. See <<optimistic-concurrency-control>> for more details.

 [float]
 [[bulk-versioning]]
-=== Versioning
+==== Versioning

 Each bulk item can include the version value using the
 `version` field. It automatically follows the behavior of the
@@ -218,7 +218,7 @@ support the `version_type` (see <<index-versioning, versioning>>).

 [float]
 [[bulk-routing]]
-=== Routing
+==== Routing

 Each bulk item can include the routing value using the
 `routing` field. It automatically follows the behavior of the
@@ -226,7 +226,7 @@ index / delete operation based on the `_routing` mapping.

 [float]
 [[bulk-wait-for-active-shards]]
-=== Wait For Active Shards
+==== Wait For Active Shards

 When making bulk calls, you can set the `wait_for_active_shards`
 parameter to require a minimum number of shard copies to be active
@@ -236,7 +236,7 @@ example.

 [float]
 [[bulk-refresh]]
-=== Refresh
+==== Refresh

 Control when the changes made by this request are visible to search. See
 <<docs-refresh,refresh>>.
@@ -250,7 +250,7 @@ participate in the `_bulk` request at all.

 [float]
 [[bulk-update]]
-=== Update
+==== Update

 When using the `update` action, `retry_on_conflict` can be used as a field in
 the action itself (not in the extra payload line), to specify how many
@@ -280,11 +280,11 @@ POST _bulk

 [float]
 [[bulk-security]]
-=== Security
+==== Security

 See <<url-access-control>>.

 [float]
 [[bulk-partial-responses]]
-=== Partial responses
+==== Partial responses
 To ensure fast responses, the bulk API will respond with partial results if one or more shards fail. See <<shard-failures, Shard failures>> for more information.
@@ -1,5 +1,5 @@
 [[optimistic-concurrency-control]]
-== Optimistic concurrency control
+=== Optimistic concurrency control

 Elasticsearch is distributed. When documents are created, updated, or deleted,
 the new version of the document has to be replicated to other nodes in the cluster.
@@ -1,9 +1,9 @@

 [[docs-replication]]
-== Reading and Writing documents
+=== Reading and Writing documents

 [float]
-=== Introduction
+==== Introduction

 Each index in Elasticsearch is <<scalability,divided into shards>>
 and each shard can have multiple copies. These copies are known as a _replication group_ and must be kept in sync when documents
@@ -21,7 +21,7 @@ This purpose of this section is to give a high level overview of the Elasticsear
 it has for various interactions between write and read operations.

 [float]
-=== Basic write model
+==== Basic write model

 Every indexing operation in Elasticsearch is first resolved to a replication group using <<index-routing,routing>>,
 typically based on the document ID. Once the replication group has been determined,
@@ -43,7 +43,7 @@ The primary shard follows this basic flow:
 completion of the request to the client.

 [float]
-==== Failure handling
+===== Failure handling

 Many things can go wrong during indexing -- disks can get corrupted, nodes can be disconnected from each other, or some
 configuration mistake could cause an operation to fail on a replica despite it being successful on the primary. These
@@ -84,7 +84,7 @@ issues can cause data loss. See <<index-wait-for-active-shards>> for some mitiga
 ************

 [float]
-=== Basic read model
+==== Basic read model

 Reads in Elasticsearch can be very lightweight lookups by ID or a heavy search request with complex aggregations that
 take non-trivial CPU power. One of the beauties of the primary-backup model is that it keeps all shard copies identical
@@ -103,7 +103,7 @@ is as follows:

 [float]
 [[shard-failures]]
-==== Shard failures
+===== Shard failures

 When a shard fails to respond to a read request, the coordinating node sends the
 request to another shard copy in the same replication group. Repeated failures
@@ -122,7 +122,7 @@ Shard failures are indicated by the `timed_out` and `_shards` fields of
 the response header.

 [float]
-=== A few simple implications
+==== A few simple implications

 Each of these basic flows determines how Elasticsearch behaves as a system for both reads and writes. Furthermore, since read
 and write requests can be executed concurrently, these two basic flows interact with each other. This has a few inherent implications:
@@ -137,7 +137,7 @@ Two copies by default:: This model can be fault tolerant while maintaining only
 quorum-based system where the minimum number of copies for fault tolerance is 3.

 [float]
-=== Failures
+==== Failures

 Under failures, the following is possible:

@@ -151,7 +151,7 @@ Dirty reads:: An isolated primary can expose writes that will not be acknowledge
 this risk by pinging the master every second (by default) and rejecting indexing operations if no master is known.

 [float]
-=== The Tip of the Iceberg
+==== The Tip of the Iceberg

 This document provides a high level overview of how Elasticsearch deals with data. Of course, there is much much more
 going on under the hood. Things like primary terms, cluster state publishing, and master election all play a role in
@@ -1,5 +1,5 @@
 [[docs-delete-by-query]]
-== Delete By Query API
+=== Delete By Query API

 The simplest usage of `_delete_by_query` just performs a deletion on every
 document that matches a query. Here is the API:
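A sketch of that simplest form (the `twitter` index and `message` field are the placeholders used throughout these docs):

[source,js]
--------------------------------------------------
POST /twitter/_delete_by_query
{
  "query": {
    "match": {
      "message": "some message"
    }
  }
}
--------------------------------------------------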
@@ -138,7 +138,7 @@ POST twitter/_delete_by_query?scroll_size=5000


 [float]
-=== URL Parameters
+==== URL Parameters

 In addition to the standard parameters like `pretty`, the delete by query API
 also supports `refresh`, `wait_for_completion`, `wait_for_active_shards`, `timeout`,
@@ -187,7 +187,7 @@ cause Elasticsearch to create many requests and then wait for a while before
 starting the next set. This is "bursty" instead of "smooth". The default is `-1`.

 [float]
-=== Response body
+==== Response body

 //////////////////////////

@@ -294,7 +294,7 @@ version conflicts.

 [float]
 [[docs-delete-by-query-task-api]]
-=== Works with the Task API
+==== Works with the Task API

 You can fetch the status of any running delete by query requests with the
 <<tasks,Task API>>:
@@ -371,7 +371,7 @@ you to delete that document.

 [float]
 [[docs-delete-by-query-cancel-task-api]]
-=== Works with the Cancel Task API
+==== Works with the Cancel Task API

 Any delete by query can be canceled using the <<tasks,task cancel API>>:

@@ -390,7 +390,7 @@ has been cancelled and terminates itself.

 [float]
 [[docs-delete-by-query-rethrottle]]
-=== Rethrottling
+==== Rethrottling

 The value of `requests_per_second` can be changed on a running delete by query
 using the `_rethrottle` API:
@@ -412,7 +412,7 @@ timeouts.

 [float]
 [[docs-delete-by-query-slice]]
-=== Slicing
+==== Slicing

 Delete by query supports <<sliced-scroll, sliced scroll>> to parallelize the deleting process.
 This parallelization can improve efficiency and provide a convenient way to
@@ -420,7 +420,7 @@ break the request down into smaller parts.

 [float]
 [[docs-delete-by-query-manual-slice]]
-==== Manual slicing
+===== Manual slicing

 Slice a delete by query manually by providing a slice id and total number of
 slices to each request:
@@ -495,7 +495,7 @@ Which results in a sensible `total` like this one:

 [float]
 [[docs-delete-by-query-automatic-slice]]
-==== Automatic slicing
+===== Automatic slicing

 You can also let delete-by-query automatically parallelize using
 <<sliced-scroll, sliced scroll>> to slice on `_id`. Use `slices` to specify the number of
@@ -581,7 +581,7 @@ though these are all taken at approximately the same time.

 [float]
 [[docs-delete-by-query-picking-slices]]
-===== Picking the number of slices
+====== Picking the number of slices

 If slicing automatically, setting `slices` to `auto` will choose a reasonable
 number for most indices. If you're slicing manually or otherwise tuning
@@ -1,5 +1,5 @@
 [[docs-delete]]
-== Delete API
+=== Delete API

 The delete API allows to delete a JSON document from a specific
 index based on its id. The following example deletes the JSON document
@@ -37,7 +37,7 @@ The result of the above delete operation is:

 [float]
 [[optimistic-concurrency-control-delete]]
-=== Optimistic concurrency control
+==== Optimistic concurrency control

 Delete operations can be made conditional and only be performed if the last
 modification to the document was assigned the sequence number and primary
@@ -47,7 +47,7 @@ and a status code of 409. See <<optimistic-concurrency-control>> for more detail

 [float]
 [[delete-versioning]]
-=== Versioning
+==== Versioning

 Each document indexed is versioned. When deleting a document, the `version` can
 be specified to make sure the relevant document we are trying to delete is
@@ -60,7 +60,7 @@ determined by the `index.gc_deletes` index setting and defaults to 60 seconds.

 [float]
 [[delete-routing]]
-=== Routing
+==== Routing

 When indexing using the ability to control the routing, in order to
 delete a document, the routing value should also be provided. For
@@ -97,7 +97,7 @@ the request.

 [float]
 [[delete-index-creation]]
-=== Automatic index creation
+==== Automatic index creation

 If an <<docs-index_,external versioning variant>> is used,
 the delete operation automatically creates an index if it has not been
@@ -106,7 +106,7 @@ for manually creating an index).

 [float]
 [[delete-distributed]]
-=== Distributed
+==== Distributed

 The delete operation gets hashed into a specific shard id. It then gets
 redirected into the primary shard within that id group, and replicated
@@ -114,7 +114,7 @@ redirected into the primary shard within that id group, and replicated

 [float]
 [[delete-wait-for-active-shards]]
-=== Wait For Active Shards
+==== Wait For Active Shards

 When making delete requests, you can set the `wait_for_active_shards`
 parameter to require a minimum number of shard copies to be active
@@ -124,7 +124,7 @@ example.

 [float]
 [[delete-refresh]]
-=== Refresh
+==== Refresh

 Control when the changes made by this request are visible to search. See
 <<docs-refresh>>.
@@ -132,7 +132,7 @@ Control when the changes made by this request are visible to search. See

 [float]
 [[delete-timeout]]
-=== Timeout
+==== Timeout

 The primary shard assigned to perform the delete operation might not be
 available when the delete operation is executed. Some reasons for this
@@ -1,5 +1,5 @@
 [[docs-get]]
-== Get API
+=== Get API

 The get API allows to get a JSON document from the index based on
 its id. The following example gets a JSON document from an index called
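A minimal sketch (`twitter` and id `0` are the placeholders used throughout these docs):

[source,js]
--------------------------------------------------
GET /twitter/_doc/0
--------------------------------------------------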
@@ -51,7 +51,7 @@ HEAD twitter/_doc/0

 [float]
 [[realtime]]
-=== Realtime
+==== Realtime

 By default, the get API is realtime, and is not affected by the refresh
 rate of the index (when data will become visible for search). If a document
@@ -62,7 +62,7 @@ one can set the `realtime` parameter to `false`.

 [float]
 [[get-source-filtering]]
-=== Source filtering
+==== Source filtering

 By default, the get operation returns the contents of the `_source` field unless
 you have used the `stored_fields` parameter or if the `_source` field is disabled.
@@ -98,7 +98,7 @@ GET twitter/_doc/0?_source=*.id,retweeted

 [float]
 [[get-stored-fields]]
-=== Stored Fields
+==== Stored Fields

 The get operation allows specifying a set of stored fields that will be
 returned by passing the `stored_fields` parameter.
@@ -219,7 +219,7 @@ will fail.

 [float]
 [[_source]]
-=== Getting the +_source+ directly
+==== Getting the +_source+ directly

 Use the `/{index}/_source/{id}` endpoint to get
 just the `_source` field of the document,
@@ -253,7 +253,7 @@ HEAD twitter/_source/1

 [float]
 [[get-routing]]
-=== Routing
+==== Routing

 When indexing using the ability to control the routing, in order to get
 a document, the routing value should also be provided. For example:
@@ -271,7 +271,7 @@ document not to be fetched.

 [float]
 [[preference]]
-=== Preference
+==== Preference

 Controls a `preference` of which shard replicas to execute the get
 request on. By default, the operation is randomized between the shard
@@ -292,7 +292,7 @@ Custom (string) value::

 [float]
 [[get-refresh]]
-=== Refresh
+==== Refresh

 The `refresh` parameter can be set to `true` in order to refresh the
 relevant shard before the get operation and make it searchable. Setting
@@ -302,7 +302,7 @@ indexing).

 [float]
 [[get-distributed]]
-=== Distributed
+==== Distributed

 The get operation gets hashed into a specific shard id. It then gets
 redirected to one of the replicas within that shard id and returns the
@@ -313,7 +313,7 @@ better GET scaling we will have.

 [float]
 [[get-versioning]]
-=== Versioning support
+==== Versioning support

 You can use the `version` parameter to retrieve the document only if
 its current version is equal to the specified one. This behavior is the same
@@ -1,5 +1,5 @@
 [[docs-index_]]
-== Index API
+=== Index API

 IMPORTANT: See <<removal-of-types>>.

@@ -54,7 +54,7 @@ NOTE: Replica shards may not all be started when an indexing operation success

 [float]
 [[index-creation]]
-=== Automatic Index Creation
+==== Automatic Index Creation

 The index operation automatically creates an index if it does not already
 exist, and applies any <<indices-templates,index templates>> that are
@@ -108,7 +108,7 @@ patterns are matched in the order in which they are given.

 [float]
 [[operation-type]]
-=== Operation Type
+==== Operation Type

 The index operation also accepts an `op_type` that can be used to force
 a `create` operation, allowing for "put-if-absent" behavior. When
@@ -142,7 +142,7 @@ PUT twitter/_create/1
 // CONSOLE

 [float]
-=== Automatic ID Generation
+==== Automatic ID Generation

 The index operation can be executed without specifying the id. In such a
 case, an id will be generated automatically. In addition, the `op_type`
@@ -183,7 +183,7 @@ The result of the above index operation is:

 [float]
 [[optimistic-concurrency-control-index]]
-=== Optimistic concurrency control
+==== Optimistic concurrency control

 Index operations can be made conditional and only be performed if the last
 modification to the document was assigned the sequence number and primary
@@ -193,7 +193,7 @@ and a status code of 409. See <<optimistic-concurrency-control>> for more detail

 [float]
 [[index-routing]]
-=== Routing
+==== Routing

 By default, shard placement -- or `routing` -- is controlled by using a
 hash of the document's id value. For more explicit control, the value
@@ -223,7 +223,7 @@ value is provided or extracted.

 [float]
 [[index-distributed]]
-=== Distributed
+==== Distributed

 The index operation is directed to the primary shard based on its route
 (see the Routing section above) and performed on the actual node
@@ -232,7 +232,7 @@ if needed, the update is distributed to applicable replicas.

 [float]
 [[index-wait-for-active-shards]]
-=== Wait For Active Shards
+==== Wait For Active Shards

 To improve the resiliency of writes to the system, indexing operations
 can be configured to wait for a certain number of active shard copies
@@ -290,14 +290,14 @@ replication succeeded/failed.

 [float]
 [[index-refresh]]
-=== Refresh
+==== Refresh

 Control when the changes made by this request are visible to search. See
 <<docs-refresh,refresh>>.

 [float]
 [[index-noop]]
-=== Noop Updates
+==== Noop Updates

 When updating a document using the index API a new version of the document is
 always created even if the document hasn't changed. If this isn't acceptable
@@ -312,7 +312,7 @@ Elasticsearch runs on the shard receiving the updates.

 [float]
 [[timeout]]
-=== Timeout
+==== Timeout

 The primary shard assigned to perform the index operation might not be
 available when the index operation is executed. Some reasons for this
@@ -336,7 +336,7 @@ PUT twitter/_doc/1?timeout=5m

 [float]
 [[index-versioning]]
-=== Versioning
+==== Versioning

 Each indexed document is given a version number. By default,
 internal versioning is used that starts at 1 and increments
@@ -381,7 +381,7 @@ latest version will be used if the index operations arrive out of order for
 whatever reason.

 [float]
-==== Version types
+===== Version types

 Next to the `external` version type explained above, Elasticsearch
 also supports other types for specific use cases. Here is an overview of
@@ -1,5 +1,5 @@
 [[docs-multi-get]]
-== Multi Get API
+=== Multi Get API

 The Multi get API returns multiple documents based on an index, type,
 (optional) and id (and possibly routing). The response includes a `docs` array
@@ -84,7 +84,7 @@ GET /test/_doc/_mget

 [float]
 [[mget-source-filtering]]
-=== Source filtering
+==== Source filtering

 By default, the `_source` field will be returned for every document (if stored).
 Similar to the <<get-source-filtering,get>> API, you can retrieve only parts of
@@ -128,7 +128,7 @@ GET /_mget

 [float]
 [[mget-fields]]
-=== Fields
+==== Fields

 Specific stored fields can be specified to be retrieved per document to get, similar to the <<get-stored-fields,stored_fields>> parameter of the Get API.
 For example:
@@ -179,7 +179,7 @@ GET /test/_doc/_mget?stored_fields=field1,field2

 [float]
 [[mget-routing]]
-=== Routing
+==== Routing

 You can also specify a routing value as a parameter:

@@ -209,11 +209,11 @@ document `test/_doc/1` will be fetched from the shard corresponding to routing k

 [float]
 [[mget-security]]
-=== Security
+==== Security

 See <<url-access-control>>.

 [float]
 [[multi-get-partial-responses]]
-=== Partial responses
+==== Partial responses
 To ensure fast responses, the multi get API will respond with partial results if one or more shards fail. See <<shard-failures, Shard failures>> for more information.
@@ -1,5 +1,5 @@
 [[docs-multi-termvectors]]
-== Multi termvectors API
+=== Multi termvectors API

 Multi termvectors API allows to get multiple termvectors at once. The
 documents from which to retrieve the term vectors are specified by an index and id.
@@ -1,5 +1,5 @@
 [[docs-refresh]]
-== `?refresh`
+=== `?refresh`

 The <<docs-index_,Index>>, <<docs-update,Update>>, <<docs-delete,Delete>>, and
 <<docs-bulk,Bulk>> APIs support setting `refresh` to control when changes made
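A sketch of the parameter in use (the `test` index and document body are placeholders):

[source,js]
--------------------------------------------------
# create a document and force an immediate refresh
PUT /test/_doc/1?refresh
{"test": "test"}
--------------------------------------------------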
@@ -30,7 +30,7 @@ Take no refresh related actions. The changes made by this request will be made
 visible at some point after the request returns.

 [float]
-=== Choosing which setting to use
+==== Choosing which setting to use

 Unless you have a good reason to wait for the change to become visible always
 use `refresh=false`, or, because that is the default, just leave the `refresh`
@@ -64,7 +64,7 @@ general, if you have a running system you don't wish to disturb then

 [float]
 [[refresh_wait_for-force-refresh]]
-=== `refresh=wait_for` Can Force a Refresh
+==== `refresh=wait_for` Can Force a Refresh

 If a `refresh=wait_for` request comes in when there are already
 `index.max_refresh_listeners` (defaults to 1000) requests waiting for a refresh
@@ -79,7 +79,7 @@ Bulk requests only take up one slot on each shard that they touch no matter how
 many times they modify the shard.

 [float]
-=== Examples
+==== Examples

 These will create a document and immediately refresh the index so it is visible:

@@ -1,5 +1,5 @@
 [[docs-reindex]]
-== Reindex API
+=== Reindex API

 IMPORTANT: Reindex requires <<mapping-source-field,`_source`>> to be enabled for
 all documents in the source index.
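The basic form is a sketch like this (the `twitter`/`new_twitter` index names are the placeholders used throughout these docs):

[source,js]
--------------------------------------------------
POST /_reindex
{
  "source": { "index": "twitter" },
  "dest": { "index": "new_twitter" }
}
--------------------------------------------------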
@ -392,7 +392,7 @@ POST _reindex
|
|||
|
||||
[float]
|
||||
[[reindex-from-remote]]
|
||||
=== Reindex from Remote
|
||||
==== Reindex from Remote
|
||||
|
||||
Reindex supports reindexing from a remote Elasticsearch cluster:
|
||||
|
||||
|
@ -525,7 +525,7 @@ POST _reindex
|
|||
|
||||
[float]
|
||||
[[reindex-ssl]]
|
||||
==== Configuring SSL parameters
|
||||
===== Configuring SSL parameters
|
||||
|
||||
Reindex from remote supports configurable SSL settings. These must be
|
||||
specified in the `elasticsearch.yml` file, with the exception of the
|
||||
|
@ -617,7 +617,7 @@ Defaults to the keystore password. This setting cannot be used with
|
|||
`reindex.ssl.keystore.key_password`.
|
||||
|
||||
[float]
|
||||
=== URL Parameters
|
||||
==== URL Parameters
|
||||
|
||||
In addition to the standard parameters like `pretty`, the Reindex API also
|
||||
supports `refresh`, `wait_for_completion`, `wait_for_active_shards`, `timeout`,
|
||||
|
@ -667,7 +667,7 @@ starting the next set. This is "bursty" instead of "smooth". The default value i
|
|||
|
||||
[float]
|
||||
[[docs-reindex-response-body]]
|
||||
=== Response body
|
||||
==== Response body
|
||||
|
||||
//////////////////////////
|
||||
[source,js]
|
||||
|
@ -781,7 +781,7 @@ the `conflicts` option to prevent reindex from aborting on version conflicts.
|
|||
|
||||
[float]
|
||||
[[docs-reindex-task-api]]
|
||||
=== Works with the Task API
|
||||
==== Works with the Task API
|
||||
|
||||
You can fetch the status of all running reindex requests with the
|
||||
<<tasks,Task API>>:
|
||||
|
@ -868,7 +868,7 @@ you to delete that document.
|
|||
|
||||
[float]
|
||||
[[docs-reindex-cancel-task-api]]
|
||||
=== Works with the Cancel Task API
|
||||
==== Works with the Cancel Task API
|
||||
|
||||
Any reindex can be canceled using the <<task-cancellation,Task Cancel API>>. For
|
||||
example:
|
||||
|
@ -887,7 +887,7 @@ API will continue to list the task until it wakes to cancel itself.
|
|||
|
||||
[float]
|
||||
[[docs-reindex-rethrottle]]
|
||||
=== Rethrottling
|
||||
==== Rethrottling
|
||||
|
||||
The value of `requests_per_second` can be changed on a running reindex using
|
||||
the `_rethrottle` API:
|
||||
|
@ -909,7 +909,7 @@ timeouts.
|
|||
|
||||
[float]
|
||||
[[docs-reindex-change-name]]
|
||||
=== Reindex to change the name of a field
|
||||
==== Reindex to change the name of a field
|
||||
|
||||
`_reindex` can be used to build a copy of an index with renamed fields. Say you
|
||||
create an index containing documents that look like this:
|
||||
|
@ -976,7 +976,7 @@ which will return:
|
|||
|
||||
[float]
|
||||
[[docs-reindex-slice]]
|
||||
=== Slicing
|
||||
==== Slicing
|
||||
|
||||
Reindex supports <<sliced-scroll>> to parallelize the reindexing process.
|
||||
This parallelization can improve efficiency and provide a convenient way to
|
||||
|
@ -984,7 +984,7 @@ break the request down into smaller parts.
|
|||
|
||||
[float]
|
||||
[[docs-reindex-manual-slice]]
|
||||
==== Manual slicing
|
||||
===== Manual slicing
|
||||
Slice a reindex request manually by providing a slice id and total number of
|
||||
slices to each request:
|
||||
|
||||
|
@ -1047,7 +1047,7 @@ which results in a sensible `total` like this one:
|
|||
|
||||
[float]
|
||||
[[docs-reindex-automatic-slice]]
|
||||
==== Automatic slicing
|
||||
===== Automatic slicing
|
||||
|
||||
You can also let `_reindex` automatically parallelize using <<sliced-scroll>> to
|
||||
slice on `_uid`. Use `slices` to specify the number of slices to use:
|
||||
@ -1121,7 +1121,7 @@ though these are all taken at approximately the same time.

[float]
[[docs-reindex-picking-slices]]
===== Picking the number of slices
====== Picking the number of slices

If slicing automatically, setting `slices` to `auto` will choose a reasonable
number for most indices. If slicing manually or otherwise tuning

@ -1140,7 +1140,7 @@ Whether query or indexing performance dominates the runtime depends on the
documents being reindexed and cluster resources.

[float]
=== Reindexing many indices
==== Reindexing many indices

If you have many indices to reindex it is generally better to reindex them
one at a time rather than using a glob pattern to pick up many indices. That
way you can resume the process if there are any errors by removing the

@ -1166,7 +1166,7 @@ done
// NOTCONSOLE

[float]
=== Reindex daily indices
==== Reindex daily indices

Notwithstanding the above advice, you can use `_reindex` in combination with
<<modules-scripting-painless, Painless>> to reindex daily indices to apply

@ -1224,7 +1224,7 @@ The previous method can also be used in conjunction with <<docs-reindex-change-n
to load only the existing data into the new index and rename any fields if needed.

[float]
=== Extracting a random subset of an index
==== Extracting a random subset of an index

`_reindex` can be used to extract a random subset of an index for testing:
@ -1,5 +1,5 @@
[[docs-termvectors]]
== Term Vectors
=== Term Vectors

Returns information and statistics on terms in the fields of a particular
document. The document could be stored in the index or artificially provided

@ -28,14 +28,14 @@ example below). Fields can also be specified with wildcards
in a similar way to the <<query-dsl-multi-match-query,multi match query>>

[float]
=== Return values
==== Return values

Three types of values can be requested: _term information_, _term statistics_
and _field statistics_. By default, all term information and field
statistics are returned for all fields but no term statistics.

[float]
==== Term information
===== Term information

* term frequency in the field (always returned)
* term positions (`positions` : true)

@ -55,7 +55,7 @@ using UTF-16.
======

[float]
==== Term statistics
===== Term statistics

Setting `term_statistics` to `true` (default is `false`) will
return

@ -68,7 +68,7 @@ By default these values are not returned since term statistics can
have a serious performance impact.

[float]
==== Field statistics
===== Field statistics

Setting `field_statistics` to `false` (default is `true`) will
omit :

@ -80,7 +80,7 @@ omit :
each term in this field)

[float]
==== Terms Filtering
===== Terms Filtering

With the parameter `filter`, the terms returned could also be filtered based
on their tf-idf scores. This could be useful in order to find out a good

@ -108,7 +108,7 @@ The following sub-parameters are supported:
The maximum word length above which words will be ignored. Defaults to unbounded (`0`).

[float]
=== Behaviour
==== Behaviour

The term and field statistics are not accurate. Deleted documents
are not taken into account. The information is only retrieved for the

@ -119,7 +119,7 @@ when requesting term vectors of artificial documents, a shard to get the statist
from is randomly selected. Use `routing` only to hit a particular shard.

[float]
==== Example: Returning stored term vectors
===== Example: Returning stored term vectors

First, we create an index that stores term vectors, payloads etc. :

@ -266,7 +266,7 @@ Response:
// TESTRESPONSE[s/"took": 6/"took": "$body.took"/]

[float]
==== Example: Generating term vectors on the fly
===== Example: Generating term vectors on the fly

Term vectors which are not explicitly stored in the index are automatically
computed on the fly. The following request returns all information and statistics for the

@ -289,7 +289,7 @@ GET /twitter/_termvectors/1

[[docs-termvectors-artificial-doc]]
[float]
==== Example: Artificial documents
===== Example: Artificial documents

Term vectors can also be generated for artificial documents,
that is for documents not present in the index. For example, the following request would
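A request of that kind takes the document inline in the body; a sketch (field values are illustrative):

[source,js]
--------------------------------------------------
GET /twitter/_termvectors
{
  "doc" : {
    "fullname" : "John Doe",
    "text" : "twitter test test test"
  }
}
--------------------------------------------------
// Illustrative: the inline "doc" is analyzed as if it were indexed,
// but nothing is stored.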
@ -313,7 +313,7 @@ GET /twitter/_termvectors

[[docs-termvectors-per-field-analyzer]]
[float]
===== Per-field analyzer
====== Per-field analyzer

Additionally, a different analyzer than the one at the field may be provided
by using the `per_field_analyzer` parameter. This is useful in order to

@ -380,7 +380,7 @@ Response:

[[docs-termvectors-terms-filtering]]
[float]
==== Example: Terms filtering
===== Example: Terms filtering

Finally, the terms returned could be filtered based on their tf-idf scores. In
the example below we obtain the three most "interesting" keywords from the
@ -1,5 +1,5 @@
[[docs-update-by-query]]
== Update By Query API
=== Update By Query API

The simplest usage of `_update_by_query` just performs an update on every
document in the index without changing the source. This is useful to

@ -196,7 +196,7 @@ POST twitter/_update_by_query?pipeline=set-foo
// TEST[setup:twitter]

[float]
=== URL Parameters
==== URL Parameters

In addition to the standard parameters like `pretty`, the Update By Query API
also supports `refresh`, `wait_for_completion`, `wait_for_active_shards`, `timeout`,

@ -246,7 +246,7 @@ starting the next set. This is "bursty" instead of "smooth". The default is `-1`

[float]
[[docs-update-by-query-response-body]]
=== Response body
==== Response body

//////////////////////////
[source,js]

@ -351,7 +351,7 @@ version conflicts.

[float]
[[docs-update-by-query-task-api]]
=== Works with the Task API
==== Works with the Task API

You can fetch the status of all running update by query requests with the
<<tasks,Task API>>:

@ -433,7 +433,7 @@ you to delete that document.

[float]
[[docs-update-by-query-cancel-task-api]]
=== Works with the Cancel Task API
==== Works with the Cancel Task API

Any update by query can be cancelled using the <<tasks,Task Cancel API>>:

@ -452,7 +452,7 @@ that it has been cancelled and terminates itself.

[float]
[[docs-update-by-query-rethrottle]]
=== Rethrottling
==== Rethrottling

The value of `requests_per_second` can be changed on a running update by query
using the `_rethrottle` API:
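As with reindex, the request is addressed by task id; a minimal sketch (the task id is an illustrative placeholder):

[source,js]
--------------------------------------------------
POST _update_by_query/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1
--------------------------------------------------
// Illustrative: the task id is a placeholder; -1 disables throttling.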
@ -474,7 +474,7 @@ timeouts.

[float]
[[docs-update-by-query-slice]]
=== Slicing
==== Slicing

Update by query supports <<sliced-scroll>> to parallelize the updating process.
This parallelization can improve efficiency and provide a convenient way to

@ -482,7 +482,7 @@ break the request down into smaller parts.

[float]
[[docs-update-by-query-manual-slice]]
==== Manual slicing
===== Manual slicing

Slice an update by query manually by providing a slice id and total number of
slices to each request:
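A sketch of one such sliced request (index name and script are illustrative):

[source,js]
--------------------------------------------------
POST twitter/_update_by_query
{
  "slice": {
    "id": 0,
    "max": 2
  },
  "script": {
    "source": "ctx._source['extra'] = 'test'"
  }
}
--------------------------------------------------
// Illustrative: issue a matching request with "id": 1 for the other slice.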
@ -539,7 +539,7 @@ Which results in a sensible `total` like this one:

[float]
[[docs-update-by-query-automatic-slice]]
==== Automatic slicing
===== Automatic slicing

You can also let update by query automatically parallelize using
<<sliced-scroll>> to slice on `_id`. Use `slices` to specify the number of

@ -612,7 +612,7 @@ though these are all taken at approximately the same time.

[float]
[[docs-update-by-query-picking-slices]]
===== Picking the number of slices
====== Picking the number of slices

If slicing automatically, setting `slices` to `auto` will choose a reasonable
number for most indices. If you're slicing manually or otherwise tuning

@ -632,7 +632,7 @@ documents being reindexed and cluster resources.

[float]
[[picking-up-a-new-property]]
=== Pick up a new property
==== Pick up a new property

Say you created an index without dynamic mapping, filled it with data, and then
added a mapping value to pick up more fields from the data:
@ -1,5 +1,5 @@
[[docs-update]]
== Update API
=== Update API

The update API allows to update a document based on a script provided.
The operation gets the document (collocated with the shard) from the

@ -25,7 +25,7 @@ PUT test/_doc/1
// CONSOLE

[float]
=== Scripted updates
==== Scripted updates

Now, we can execute a script that would increment the counter:
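Such a scripted update looks like this (it assumes the document created above has a numeric `counter` field):

[source,js]
--------------------------------------------------
POST test/_update/1
{
  "script" : {
    "source": "ctx._source.counter += params.count",
    "lang": "painless",
    "params" : {
      "count" : 4
    }
  }
}
--------------------------------------------------
// Assumes doc 1 in index "test" already has a "counter" field to increment.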
@ -135,7 +135,7 @@ POST test/_update/1
// TEST[continued]

[float]
=== Updates with a partial document
==== Updates with a partial document

The update API also supports passing a partial document,
which will be merged into the existing document (simple recursive merge,

@ -161,7 +161,7 @@ If both `doc` and `script` are specified, then `doc` is ignored. Best is
to put your field pairs of the partial document in the script itself.

[float]
=== Detecting noop updates
==== Detecting noop updates

If `doc` is specified its value is merged with the existing `_source`.
By default updates that don't change anything detect that they don't change anything and return `"result": "noop"` like this:

@ -216,7 +216,7 @@ POST test/_update/1

[[upserts]]
[float]
=== Upserts
==== Upserts

If the document does not already exist, the contents of the `upsert` element
will be inserted as a new document. If the document does exist, then the

@ -243,7 +243,7 @@ POST test/_update/1

[float]
[[scripted_upsert]]
==== `scripted_upsert`
===== `scripted_upsert`

If you would like your script to run regardless of whether the document exists
or not -- i.e. the script handles initializing the document instead of the

@ -273,7 +273,7 @@ POST sessions/_update/dh3sgudg8gsrgl

[float]
[[doc_as_upsert]]
==== `doc_as_upsert`
===== `doc_as_upsert`

Instead of sending a partial `doc` plus an `upsert` doc, setting
`doc_as_upsert` to `true` will use the contents of `doc` as the `upsert`
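A minimal sketch of that form (index, id, and field are illustrative):

[source,js]
--------------------------------------------------
POST test/_update/1
{
  "doc": {
    "name": "new_name"
  },
  "doc_as_upsert": true
}
--------------------------------------------------
// Illustrative: creates the document with this "doc" if id 1 does not exist,
// otherwise merges the partial document into it.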
@ -293,7 +293,7 @@ POST test/_update/1
// TEST[continued]

[float]
=== Parameters
==== Parameters

The update operation supports the following query-string parameters:

@ -1,7 +1,7 @@
[role="xpack"]
[testenv="basic"]
[[snapshot-lifecycle-management-api]]
== Snapshot Lifecycle Management API
== Snapshot lifecycle management API

The Snapshot Lifecycle Management APIs are used to manage policies for the time
and frequency of automatic snapshots. Snapshot Lifecycle Management is related
@ -1,12 +1,12 @@
[[indices]]
= Indices APIs
== Index APIs

Index APIs are used to manage individual indices,
index settings, aliases, mappings, and index templates.

[float]
[[index-management]]
== Index management:
=== Index management:

* <<indices-create-index>>
* <<indices-delete-index>>

@ -22,7 +22,7 @@ index settings, aliases, mappings, and index templates.

[float]
[[mapping-management]]
== Mapping management:
=== Mapping management:

* <<indices-put-mapping>>
* <<indices-get-mapping>>

@ -31,12 +31,12 @@ index settings, aliases, mappings, and index templates.

[float]
[[alias-management]]
== Alias management:
=== Alias management:

* <<indices-aliases>>

[float]
[[index-settings]]
== Index settings:
=== Index settings:

* <<indices-update-settings>>
* <<indices-get-settings>>
* <<indices-analyze>>

@ -44,7 +44,7 @@ index settings, aliases, mappings, and index templates.

[float]
[[monitoring]]
== Monitoring:
=== Monitoring:

* <<indices-stats>>
* <<indices-segments>>
* <<indices-recovery>>

@ -52,7 +52,7 @@ index settings, aliases, mappings, and index templates.

[float]
[[status-management]]
== Status management:
=== Status management:

* <<indices-clearcache>>
* <<indices-refresh>>
* <<indices-flush>>

@ -74,12 +74,10 @@ include::indices/split-index.asciidoc[]

include::indices/rollover-index.asciidoc[]

:leveloffset: -1

include::indices/apis/freeze.asciidoc[]

include::indices/apis/unfreeze.asciidoc[]

:leveloffset: +1

include::indices/put-mapping.asciidoc[]

include::indices/get-mapping.asciidoc[]
@ -1,5 +1,5 @@
[[indices-aliases]]
== Index Aliases
=== Index Aliases

APIs in Elasticsearch accept an index name when working against a
specific index, and several indices when applicable. The index aliases

@ -130,7 +130,7 @@ POST /_aliases

[float]
[[filtered]]
=== Filtered Aliases
==== Filtered Aliases

Aliases with filters provide an easy way to create different "views" of
the same index. The filter can be defined using Query DSL and is applied

@ -177,7 +177,7 @@ POST /_aliases

[float]
[[aliases-routing]]
==== Routing
===== Routing

It is possible to associate routing values with aliases. This feature
can be used together with filtering aliases in order to avoid

@ -244,7 +244,7 @@ GET /alias2/_search?q=user:kimchy&routing=2,3

[float]
[[aliases-write-index]]
==== Write Index
===== Write Index

It is possible to associate the index pointed to by an alias as the write index.
When specified, all index and update requests against an alias that point to multiple

@ -342,7 +342,7 @@ writes will be rejected.

[float]
[[alias-adding]]
=== Add a single alias
==== Add a single alias

An alias can also be added with the endpoint

@ -360,7 +360,7 @@ where
You can also use the plural `_aliases`.

[float]
==== Examples:
===== Examples:

Adding time based alias::
+

@ -412,7 +412,7 @@ PUT /users/_alias/user_12

[float]
[[alias-index-creation]]
=== Aliases during index creation
==== Aliases during index creation

Aliases can also be specified during <<create-index-aliases,index creation>>:

@ -439,7 +439,7 @@ PUT /logs_20162801

[float]
[[deleting]]
=== Delete aliases
==== Delete aliases

The rest endpoint is: `/{index}/_alias/{name}`

@ -461,7 +461,7 @@ DELETE /logs_20162801/_alias/current_day

[float]
[[alias-retrieving]]
=== Retrieving existing aliases
==== Retrieving existing aliases

The get index alias API allows to filter by
alias name and index name. This API redirects to the master and fetches

@ -487,7 +487,7 @@ Possible options:
The rest endpoint is: `/{index}/_alias/{alias}`.

[float]
==== Examples:
===== Examples:

All aliases for the index `logs_20162801`:
@ -1,5 +1,5 @@
|
|||
[[indices-analyze]]
|
||||
== Analyze
|
||||
=== Analyze
|
||||
|
||||
Performs the analysis process on a text and return the tokens breakdown
|
||||
of the text.
|
||||
|
@ -140,7 +140,7 @@ GET _analyze
|
|||
// CONSOLE
|
||||
|
||||
[[explain-analyze-api]]
|
||||
=== Explain Analyze
|
||||
==== Explain Analyze
|
||||
|
||||
If you want to get more advanced details, set `explain` to `true` (defaults to `false`). It will output all token attributes for each token.
|
||||
You can filter token attributes you want to output by setting `attributes` option.
|
||||
|
@ -211,7 +211,7 @@ The request returns the following result:
|
|||
|
||||
[[tokens-limit-settings]]
|
||||
[float]
|
||||
== Settings to prevent tokens explosion
|
||||
=== Settings to prevent tokens explosion
|
||||
Generating excessive amount of tokens may cause a node to run out of memory.
|
||||
The following setting allows to limit the number of tokens that can be produced:
|
||||
|
||||
|
|
@ -1,5 +1,5 @@
[[indices-clearcache]]
== Clear Cache
=== Clear Cache

The clear cache API allows to clear either all caches or specific cached
associated with one or more indices.

@ -40,7 +40,7 @@ POST /twitter/_cache/clear?fields=foo,bar <1>
<1> Clear the cache for the `foo` and `bar` fields

[float]
=== Multi Index
==== Multi Index

The clear cache API can be applied to more than one index with a single
call, or even on `_all` the indices.
@ -1,5 +1,5 @@
[[indices-create-index]]
== Create Index
=== Create Index

The Create Index API is used to manually create an index in Elasticsearch. All documents in Elasticsearch
are stored inside of one index or another.

@ -30,7 +30,7 @@ There are several limitations to what you can name your index. The complete lis

[float]
[[create-index-settings]]
=== Index Settings
==== Index Settings

Each index created can have specific settings
associated with it, defined in the body:
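For instance (the index name and values are illustrative):

[source,js]
--------------------------------------------------
PUT twitter
{
  "settings" : {
    "number_of_shards" : 3,
    "number_of_replicas" : 2
  }
}
--------------------------------------------------
// Illustrative: shard and replica counts here are example values only.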
@ -76,7 +76,7 @@ that can be set when creating an index, please check the

[float]
[[mappings]]
=== Mappings
==== Mappings

The create index API allows for providing a mapping definition:

@ -102,7 +102,7 @@ include_type_name is set. For more details, please see <<removal-of-types>>.

[float]
[[create-index-aliases]]
=== Aliases
==== Aliases

The create index API allows also to provide a set of <<indices-aliases,aliases>>:

@ -125,7 +125,7 @@ PUT test

[float]
[[create-index-wait-for-active-shards]]
=== Wait For Active Shards
==== Wait For Active Shards

By default, index creation will only return a response to the client when the primary copies of
each shard have been started, or the request times out. The index creation response will indicate

@ -1,5 +1,5 @@
[[indices-delete-index]]
== Delete Index
=== Delete Index

The delete index API allows to delete an existing index.
@ -1,5 +1,5 @@
[[indices-flush]]
== Flush
=== Flush

The flush API allows to flush one or more indices through an API. The
flush process of an index makes sure that any data that is currently only

@ -18,7 +18,7 @@ POST twitter/_flush

[float]
[[flush-parameters]]
=== Request Parameters
==== Request Parameters

The flush API accepts the following request parameters:

@ -33,7 +33,7 @@ should be incremented even if no uncommitted changes are present.

[float]
[[flush-multi-index]]
=== Multi Index
==== Multi Index

The flush API can be applied to more than one index with a single call,
or even on `_all` the indices.

@ -48,7 +48,7 @@ POST _flush
// TEST[s/^/PUT kimchy\nPUT elasticsearch\n/]

[[synced-flush-api]]
=== Synced Flush
==== Synced Flush

Elasticsearch tracks the indexing activity of each shard. Shards that have not
received any indexing operations for 5 minutes are automatically marked as inactive. This presents

@ -119,7 +119,7 @@ which returns something similar to:
<1> the `sync id` marker

[float]
=== Synced Flush API
==== Synced Flush API

The Synced Flush API allows an administrator to initiate a synced flush manually. This can be particularly useful for
a planned (rolling) cluster restart where you can stop indexing and don't want to wait the default 5 minutes for

@ -1,5 +1,5 @@
[[indices-forcemerge]]
== Force Merge
=== Force Merge

The force merge API allows to force merging of one or more indices through an
API. The merge relates to the number of segments a Lucene index holds within

@ -24,7 +24,7 @@ POST /twitter/_forcemerge

[float]
[[forcemerge-parameters]]
=== Request Parameters
==== Request Parameters

The force merge API accepts the following request parameters:

@ -52,7 +52,7 @@ POST /kimchy/_forcemerge?only_expunge_deletes=false&max_num_segments=100&flush=t

[float]
[[forcemerge-multi-index]]
=== Multi Index
==== Multi Index

The force merge API can be applied to more than one index with a single call, or
even on `_all` the indices. Multi index operations are executed one shard at a
@ -1,5 +1,5 @@
[[indices-get-field-mapping]]
== Get Field Mapping
=== Get Field Mapping

The get field mapping API allows you to retrieve mapping definitions for one or more fields.
This is useful when you do not need the complete type mapping returned by

@ -59,7 +59,7 @@ For which the response is:
// TESTRESPONSE

[float]
=== Multiple Indices and Fields
==== Multiple Indices and Fields

The get field mapping API can be used to get the mapping of multiple fields from more than one index
with a single call. General usage of the API follows the

@ -81,7 +81,7 @@ GET /_all/_mapping/field/*.id
// TEST[s/^/PUT kimchy\nPUT book\n/]

[float]
=== Specifying fields
==== Specifying fields

The get mapping API allows you to specify a comma-separated list of fields.

@ -168,7 +168,7 @@ returns:
// TESTRESPONSE

[float]
=== Other options
==== Other options

[horizontal]
`include_defaults`::

@ -1,5 +1,5 @@
[[indices-get-index]]
== Get Index
=== Get Index

The get index API allows to retrieve information about one or more indexes.

@ -1,5 +1,5 @@
[[indices-get-mapping]]
== Get Mapping
=== Get Mapping

The get mapping API allows to retrieve mapping definitions for an index or
index/type.

@ -16,7 +16,7 @@ in responses no longer contain a type name by default, you can still request the
through the parameter include_type_name. For more details, please see <<removal-of-types>>.

[float]
=== Multiple Indices
==== Multiple Indices

The get mapping API can be used to get more than one index with a
single call. General usage of the API follows the following syntax:
@ -1,5 +1,5 @@
[[indices-get-settings]]
== Get Settings
=== Get Settings

The get settings API allows to retrieve settings of index/indices:

@ -11,7 +11,7 @@ GET /twitter/_settings
// TEST[setup:twitter]

[float]
=== Multiple Indices and Types
==== Multiple Indices and Types

The get settings API can be used to get settings for more than one index
with a single call. General usage of the API follows the

@ -33,7 +33,7 @@ GET /log_2013_*/_settings
// TEST[s/^/PUT kimchy\nPUT log_2013_01_01\n/]

[float]
=== Filtering settings by name
==== Filtering settings by name

The settings that are returned can be filtered with wildcard matching
as follows:
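A sketch of such a filtered request (the index pattern matches the example indices above):

[source,js]
--------------------------------------------------
GET /log_2013_*/_settings/index.number_*
--------------------------------------------------
// Illustrative: returns only settings whose names match index.number_*,
// such as index.number_of_shards and index.number_of_replicas.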
@ -1,5 +1,5 @@
[[indices-exists]]
== Indices Exists
=== Indices Exists

Used to check if the index (indices) exists or not. For example:

@ -1,5 +1,5 @@
[[indices-open-close]]
== Open / Close Index API
=== Open / Close Index API

The open and close index APIs allow to close an index, and later on
opening it.

@ -80,7 +80,7 @@ Closed indices consume a significant amount of disk-space which can cause proble
API by setting `cluster.indices.close.enable` to `false`. The default is `true`.

[float]
=== Wait For Active Shards
==== Wait For Active Shards

Because opening or closing an index allocates its shards, the
<<create-index-wait-for-active-shards,`wait_for_active_shards`>> setting on

@ -1,5 +1,5 @@
[[indices-put-mapping]]
== Put Mapping
=== Put Mapping

The PUT mapping API allows you to add fields to an existing index or to change search only settings of existing fields.

@ -28,7 +28,7 @@ types in requests is now deprecated, a type can still be provided if the request
include_type_name is set. For more details, please see <<removal-of-types>>.

[float]
=== Multi-index
==== Multi-index

The PUT mapping API can be applied to multiple indices with a single request.
For example, we can update the `twitter-1` and `twitter-2` mappings at the same time:

@ -55,7 +55,7 @@ PUT /twitter-1,twitter-2/_mapping <1>

[[updating-field-mappings]]
[float]
=== Updating field mappings
==== Updating field mappings

In general, the mapping for existing fields cannot be updated. There are some
exceptions to this rule. For instance:

@ -1,5 +1,5 @@
[[indices-recovery]]
== Indices Recovery
=== Indices Recovery

The indices recovery API provides insight into on-going index shard recoveries.
Recovery status may be reported for specific indices, or cluster-wide.

@ -1,5 +1,5 @@
[[indices-refresh]]
== Refresh
=== Refresh

The refresh API allows to explicitly refresh one or more index, making
all operations performed since the last refresh available for search.

@ -15,7 +15,7 @@ POST /twitter/_refresh
// TEST[setup:twitter]

[float]
=== Multi Index
==== Multi Index

The refresh API can be applied to more than one index with a single
call, or even on `_all` the indices.
@ -1,5 +1,5 @@
[[indices-rollover-index]]
== Rollover Index
=== Rollover Index

The rollover index API rolls an <<indices-aliases, alias>> to a new index when
the existing index meets a condition you provide. You can use this API to retire

@ -88,7 +88,7 @@ The above request might return the following response:
<3> The result of each condition.

[float]
=== Naming the new index
==== Naming the new index

If the name of the existing index ends with `-` and a number -- e.g.
`logs-000001` -- then the name of the new index will follow the same pattern,

@ -113,7 +113,7 @@ POST /my_alias/_rollover/my_new_index_name
// TEST[s/^/PUT my_old_index_name\nPUT my_old_index_name\/_alias\/my_alias\n/]

[float]
=== Using date math with the rollover API
==== Using date math with the rollover API

It can be useful to use <<date-math-index-names,date math>> to name the
rollover index according to the date that the index rolled over, e.g.

@ -193,7 +193,7 @@ GET /%3Clogs-%7Bnow%2Fd%7D-*%3E%2C%3Clogs-%7Bnow%2Fd-1d%7D-*%3E%2C%3Clogs-%7Bnow
// TEST[s/now/2016.10.31||/]

[float]
=== Defining the new index
==== Defining the new index

The settings, mappings, and aliases for the new index are taken from any
matching <<indices-templates,index templates>>. Additionally, you can specify

@ -226,7 +226,7 @@ POST /logs_write/_rollover
// CONSOLE

[float]
=== Dry run
==== Dry run

The rollover API supports `dry_run` mode, where request conditions can be
checked without performing the actual rollover:

@ -252,7 +252,7 @@ POST /logs_write/_rollover?dry_run
// CONSOLE

[float]
=== Wait For Active Shards
==== Wait For Active Shards

Because the rollover operation creates a new index to rollover to, the
<<create-index-wait-for-active-shards,`wait_for_active_shards`>> setting on

@ -260,7 +260,7 @@ index creation applies to the rollover action as well.

[[indices-rollover-is-write-index]]
[float]
=== Write Index Alias Behavior
==== Write Index Alias Behavior

The rollover alias when rolling over a write index that has `is_write_index` explicitly set to `true` is not
swapped during rollover actions. Since having an alias point to multiple indices is ambiguous in distinguishing

@ -1,5 +1,5 @@
[[indices-segments]]
== Indices Segments
=== Indices Segments

Provide low level segments information that a Lucene index (shard level)
is built with. Allows to be used to provide more information on the

@ -119,7 +119,7 @@ compound:: Whether the segment is stored in a compound file. When true, this
attributes:: Contains information about whether high compression was enabled

[float]
=== Verbose mode
==== Verbose mode

To add additional information that can be used for debugging, use the `verbose` flag.

@ -1,5 +1,5 @@
[[indices-shards-stores]]
== Indices Shard Stores
=== Indices Shard Stores

Provides store information for shard copies of indices.
Store information reports on which nodes shard copies exist, the shard
@ -1,5 +1,5 @@
[[indices-shrink-index]]
== Shrink Index
=== Shrink Index

The shrink index API allows you to shrink an existing index into a new index
with fewer primary shards. The requested number of primary shards in the target index

@ -26,7 +26,7 @@ Shrinking works as follows:
had just been re-opened.

[float]
=== Preparing an index for shrinking
==== Preparing an index for shrinking

In order to shrink an index, the index must be marked as read-only, and a
(primary or replica) copy of every shard in the index must be relocated to the

@ -58,7 +58,7 @@ with the <<cat-recovery,`_cat recovery` API>>, or the <<cluster-health,
with the `wait_for_no_relocating_shards` parameter.

[float]
=== Shrinking an index
==== Shrinking an index

To shrink `my_source_index` into a new index called `my_target_index`, issue
the following request:
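A sketch of the request (the target settings shown are example values):

[source,js]
--------------------------------------------------
POST my_source_index/_shrink/my_target_index
{
  "settings": {
    "index.number_of_replicas": 1,
    "index.number_of_shards": 1
  }
}
--------------------------------------------------
// Illustrative: the target shard count must be a factor of the source's
// shard count; replica count here is an example value.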
@ -134,7 +134,7 @@ POST my_source_index/_shrink/my_target_index
NOTE: Mappings may not be specified in the `_shrink` request.

[float]
=== Monitoring the shrink process
==== Monitoring the shrink process

The shrink process can be monitored with the <<cat-recovery,`_cat recovery`
API>>, or the <<cluster-health, `cluster health` API>> can be used to wait

@ -153,7 +153,7 @@ become `active`. At that point, Elasticsearch will try to allocate any
replicas and may decide to relocate the primary shard to another node.

[float]
=== Wait For Active Shards
==== Wait For Active Shards

Because the shrink operation creates a new index to shrink the shards to,
the <<create-index-wait-for-active-shards,wait for active shards>> setting

@ -1,5 +1,5 @@
[[indices-split-index]]
== Split Index
=== Split Index

The split index API allows you to split an existing index into a new index,
where each original primary shard is split into two or more primary shards in

@ -32,7 +32,7 @@ properties of the default number of routing shards will then apply to the
newly split index.

[float]
=== How does splitting work?
==== How does splitting work?

Splitting works as follows:

@ -51,7 +51,7 @@ Splitting works as follows:

[float]
[[incremental-resharding]]
=== Why doesn't Elasticsearch support incremental resharding?
==== Why doesn't Elasticsearch support incremental resharding?

Going from `N` shards to `N+1` shards, aka. incremental resharding, is indeed a
feature that is supported by many key-value stores. Adding a new shard and

@ -81,7 +81,7 @@ old and new indices have respectively +M+ and +N+ shards, this has no overhead
compared to searching an index that would have +M+N+ shards.

[float]
=== Preparing an index for splitting
==== Preparing an index for splitting

Create a new index:

@ -117,7 +117,7 @@ PUT /my_source_index/_settings
changes like deleting the index.

[float]
=== Splitting an index
==== Splitting an index

To split `my_source_index` into a new index called `my_target_index`, issue
the following request:

@ -179,7 +179,7 @@ POST my_source_index/_split/my_target_index
NOTE: Mappings may not be specified in the `_split` request.

[float]
=== Monitoring the split process
==== Monitoring the split process

The split process can be monitored with the <<cat-recovery,`_cat recovery`
API>>, or the <<cluster-health, `cluster health` API>> can be used to wait

@ -198,7 +198,7 @@ become `active`. At that point, Elasticsearch will try to allocate any
replicas and may decide to relocate the primary shard to another node.

[float]
=== Wait For Active Shards
==== Wait For Active Shards

Because the split operation creates a new index to split the shards to,
the <<create-index-wait-for-active-shards,wait for active shards>> setting

@ -1,5 +1,5 @@
[[indices-stats]]
== Indices Stats
=== Indices Stats

Indices level stats provide statistics on different operations happening
on an index. The API provides statistics on the index level scope
@ -1,5 +1,5 @@
[[indices-templates]]
== Index Templates
=== Index Templates

Index templates allow you to define templates that will automatically be
applied when new indices are created. The templates include both

@ -78,7 +78,7 @@ actual index name that the template gets applied to, during index creation.

[float]
[[delete]]
=== Deleting a Template
==== Deleting a Template

Index templates are identified by a name (in the above case
`template_1`) and can be deleted as well:

@ -91,7 +91,7 @@ DELETE /_template/template_1

[float]
[[getting]]
=== Getting templates
==== Getting templates

Index templates are identified by a name (in the above case
`template_1`) and can be retrieved using the following:

@ -121,7 +121,7 @@ GET /_template

[float]
[[indices-templates-exists]]
=== Template exists
==== Template exists

Used to check if the template exists or not. For example:

@ -141,7 +141,7 @@ the parameter include_type_name. For more details, please see <<removal-of-types

[float]
[[multiple-templates]]
=== Multiple Templates Matching
==== Multiple Templates Matching

Multiple index templates can potentially match an index, in this case,
both the settings and mappings are merged into the final configuration

@ -189,7 +189,7 @@ result in a non-deterministic merging order.

[float]
[[versioning-templates]]
=== Template Versioning
==== Template Versioning

Templates can optionally add a `version` number, which can be any integer value,
in order to simplify template management by external systems. The `version`

@ -1,5 +1,5 @@
[[indices-types-exists]]
== Types Exists
=== Types Exists

deprecated[7.0.0, Types are deprecated and are in the process of being removed. See <<removal-of-types>>.]

@ -1,5 +1,5 @@
[[indices-update-settings]]
== Update Indices Settings
=== Update Indices Settings

Change specific index level settings in real time.

@ -40,7 +40,7 @@ request parameter can be set to `true`.

[float]
[[bulk]]
=== Bulk Indexing Usage
==== Bulk Indexing Usage

For example, the update settings API can be used to dynamically change
the index from being more performant for bulk indexing, and then move it

@ -88,7 +88,7 @@ POST /twitter/_forcemerge?max_num_segments=5

[float]
[[update-settings-analysis]]
=== Updating Index Analysis
==== Updating Index Analysis

It is also possible to define new <<analysis,analyzers>> for the index.
But it is required to <<indices-open-close,close>> the index
@ -10,48 +10,46 @@ directly to configure and access {es} features.
We are working on including more {es} APIs in this section. Some content might
not be included yet.

* <<docs>>
* <<search>>
* <<indices>>
* <<cat>>
* <<cluster>>
* <<info-api,Info API>>
* <<api-conventions, API conventions>>
* <<cat, cat APIs>>
* <<cluster, Cluster APIs>>
* <<ccr-apis,{ccr-cap} APIs>>
* <<data-frame-apis,{dataframe-cap} APIs>>
* <<docs, Document APIs>>
* <<graph-explore-api,Graph Explore API>>
* <<indices-reload-analyzers,Reload Search Analyzers API>>
* <<indices, Index APIs>>
* <<index-lifecycle-management-api,Index lifecycle management APIs>>
* <<snapshot-lifecycle-management-api,Snapshot lifecycle management APIs>>
* <<info-api,Info API>>
* <<licensing-apis,Licensing APIs>>
* <<ml-apis,{ml-cap} {anomaly-detect} APIs>>
* <<ml-df-analytics-apis,{ml-cap} {dfanalytics} APIs>>
* <<security-api,Security APIs>>
* <<watcher-api,Watcher APIs>>
* <<rollup-apis,Rollup APIs>>
* <<migration-api,Migration APIs>>
* <<indices-reload-analyzers,Reload Search Analyzers API>>
* <<rollup-apis,Rollup APIs>>
* <<search, Search APIs>>
* <<security-api,Security APIs>>
* <<snapshot-lifecycle-management-api,Snapshot lifecycle management APIs>>
* <<watcher-api,Watcher APIs>>
--

:leveloffset: +1
include::{es-repo-dir}/api-conventions.asciidoc[]
include::{es-repo-dir}/docs.asciidoc[]
include::{es-repo-dir}/search.asciidoc[]
include::{es-repo-dir}/indices.asciidoc[]
include::{es-repo-dir}/cat.asciidoc[]
include::{es-repo-dir}/cluster.asciidoc[]

:leveloffset: -1
include::info.asciidoc[]
include::{es-repo-dir}/ccr/apis/ccr-apis.asciidoc[]
include::{es-repo-dir}/data-frames/apis/index.asciidoc[]
include::{es-repo-dir}/docs.asciidoc[]
include::{es-repo-dir}/graph/explore.asciidoc[]
include::{es-repo-dir}/indices.asciidoc[]
include::{es-repo-dir}/ilm/apis/ilm-api.asciidoc[]
include::{es-repo-dir}/ilm/apis/slm-api.asciidoc[]
include::info.asciidoc[]
include::{es-repo-dir}/licensing/index.asciidoc[]
include::{es-repo-dir}/migration/migration.asciidoc[]
include::{es-repo-dir}/ml/anomaly-detection/apis/ml-api.asciidoc[]
include::{es-repo-dir}/ml/df-analytics/apis/index.asciidoc[]
include::{es-repo-dir}/rollup/rollup-api.asciidoc[]
include::{xes-repo-dir}/rest-api/security.asciidoc[]
include::{xes-repo-dir}/rest-api/watcher.asciidoc[]
include::{es-repo-dir}/migration/migration.asciidoc[]
include::{es-repo-dir}/indices/apis/reload-analyzers.asciidoc[]
include::{es-repo-dir}/rollup/rollup-api.asciidoc[]
include::{es-repo-dir}/search.asciidoc[]
include::{xes-repo-dir}/rest-api/security.asciidoc[]
include::{es-repo-dir}/ilm/apis/slm-api.asciidoc[]
include::{xes-repo-dir}/rest-api/watcher.asciidoc[]
include::defs.asciidoc[]
@ -1,12 +1,12 @@
[[search]]
= Search APIs
== Search APIs

Most search APIs are <<search-multi-index,multi-index>>, with the
exception of the <<search-explain>> endpoints.

[float]
[[search-routing]]
== Routing
=== Routing

When executing a search, Elasticsearch will pick the "best" copy of the data
based on the <<search-adaptive-replica,adaptive replica selection>> formula.

@ -56,7 +56,7 @@ the routing values match to.

[float]
[[search-adaptive-replica]]
== Adaptive Replica Selection
=== Adaptive Replica Selection

By default, Elasticsearch will use what is called adaptive replica selection.
This allows the coordinating node to send the request to the copy deemed "best"

@ -87,7 +87,7 @@ index/indices shards in a round robin fashion between all copies of the data

[float]
[[stats-groups]]
== Stats Groups
=== Stats Groups

A search can be associated with stats groups, which maintains a
statistics aggregation per group. It can later be retrieved using the

@ -110,7 +110,7 @@ POST /_search

[float]
[[global-search-timeout]]
== Global Search Timeout
=== Global Search Timeout

Individual searches can have a timeout as part of the
<<search-request-body>>. Since search requests can originate from many

@ -127,7 +127,7 @@ Setting this value to `-1` resets the global search timeout to no timeout.

[float]
[[global-search-cancellation]]
== Search Cancellation
=== Search Cancellation

Searches can be cancelled using standard <<task-cancellation,task cancellation>>
mechanism. By default, a running search only checks if it is cancelled or

@ -140,7 +140,7 @@ setting only affects the searches that start after the change is made.

[float]
[[search-concurrency-and-parallelism]]
== Search concurrency and parallelism
=== Search concurrency and parallelism

By default Elasticsearch doesn't reject any search requests based on the number
of shards the request hits. While Elasticsearch will optimize the search
@ -1,5 +1,5 @@
[[search-count]]
== Count API
=== Count API

The count API allows to easily execute a query and get the number of
matches for that query. It can be executed across one or more indices.
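In its simplest form (index and field are illustrative):

[source,js]
--------------------------------------------------
GET /twitter/_count?q=user:kimchy
--------------------------------------------------
// Illustrative: counts documents in "twitter" matching the query string.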
@ -49,12 +49,12 @@ The query is optional, and when not provided, it will use `match_all` to
count all the docs.

[float]
=== Multi index
==== Multi index

The count API can be applied to <<search-multi-index,multiple indices>>.

[float]
=== Request Parameters
==== Request Parameters

When executing count using the query parameter `q`, the query passed is
a query string using Lucene query parser. There are additional

@ -85,7 +85,7 @@ Defaults to no terminate_after.
|=======================================================================

[float]
=== Request Body
==== Request Body

The count can use the <<query-dsl,Query DSL>> within
its body in order to express the query that should be executed. The body

@ -95,14 +95,14 @@ Both HTTP GET and HTTP POST can be used to execute count with body.
Since not all clients support GET with body, POST is allowed as well.

[float]
=== Distributed
==== Distributed

The count operation is broadcast across all shards. For each shard id
group, a replica is chosen and executed against it. This means that
replicas increase the scalability of count.

[float]
=== Routing
==== Routing

The routing value (a comma separated list of the routing values) can be
specified to control which shards the count request will be executed on.
@ -1,5 +1,5 @@
[[search-explain]]
== Explain API
=== Explain API

The explain api computes a score explanation for a query and a specific
document. This can give useful feedback whether a document matches or
didn't match a specific query.

@ -8,7 +8,7 @@ didn't match a specific query.
Note that a single index must be provided to the `index` parameter.

[float]
=== Usage
==== Usage

Full query example:
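A sketch of the body form, consistent with the `GET /twitter/_explain/0` variant that follows (query values are illustrative):

[source,js]
--------------------------------------------------
GET /twitter/_explain/0
{
  "query" : {
    "match" : { "message" : "elasticsearch" }
  }
}
--------------------------------------------------
// Illustrative: explains how document 0 in "twitter" scores against the query.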
@ -116,7 +116,7 @@ GET /twitter/_explain/0?q=message:search
This will yield the same result as the previous request.

[float]
=== All parameters:
==== All parameters:

[horizontal]
`_source`::

@ -1,5 +1,5 @@
[[search-field-caps]]
== Field Capabilities API
=== Field Capabilities API

The field capabilities API allows to retrieve the capabilities of fields among multiple indices.

@ -27,7 +27,7 @@ Supported request options:
will cause all fields that match the expression to be returned.

[float]
=== Field Capabilities
==== Field Capabilities

The field capabilities API returns the following information per field:

@ -57,7 +57,7 @@ or null if all indices have the same definition for the field.

[float]
=== Response format
==== Response format

Request:

@ -105,7 +105,7 @@ and as a `keyword` in `index3` and `index4`.
<4> The field `title` is defined as `text` in all indices.

[float]
=== Unmapped fields
==== Unmapped fields

By default unmapped fields are ignored. You can include them in the response by
adding a parameter called `include_unmapped` in the request:
@ -1,5 +1,5 @@
[[search-multi-search]]
== Multi Search API
=== Multi Search API

The multi search API allows to execute several search requests within
the same API. The endpoint for it is `_msearch`.
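The request body is newline-delimited: a header line (which may be an empty object) followed by a body line per search. A sketch (index names are illustrative):

[source,js]
--------------------------------------------------
GET twitter/_msearch
{}
{"query" : {"match_all" : {}}, "from" : 0, "size" : 10}
{}
{"query" : {"match_all" : {}}}
{"index" : "twitter2"}
{"query" : {"match_all" : {}}}
--------------------------------------------------
// Illustrative: empty header objects inherit the index from the URL;
// a header with "index" targets a different index for that search.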
@ -98,13 +98,13 @@ increase this value to a higher number.

[float]
[[msearch-security]]
=== Security
==== Security

See <<url-access-control>>

[float]
[[template-msearch]]
=== Template support
==== Template support

Much like described in <<search-template>> for the _search resource, _msearch
also provides support for templates. Submit them like follows:

@ -177,5 +177,5 @@ GET _msearch/template

[float]
[[multi-search-partial-responses]]
=== Partial responses
==== Partial responses

To ensure fast responses, the multi search API will respond with partial results if one or more shards fail. See <<shard-failures, Shard failures>> for more information.

@ -1,5 +1,5 @@
[[search-profile]]
== Profile API
=== Profile API

WARNING: The Profile API is a debugging tool and adds significant overhead to search execution.

@ -14,7 +14,7 @@ The output from the Profile API is *very* verbose, especially for complicated re
many shards. Pretty-printing the response is recommended to help understand the output

[float]
=== Usage
==== Usage

Any `_search` request can be profiled by adding a top-level `profile` parameter:
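For instance (index and query are illustrative):

[source,js]
--------------------------------------------------
GET /twitter/_search
{
  "profile": true,
  "query" : {
    "match" : { "message" : "some number" }
  }
}
--------------------------------------------------
// Illustrative: setting "profile": true returns detailed timing
// information alongside the normal search hits.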
|
||||
|
@ -228,7 +228,7 @@ NOTE: As with other statistics apis, the Profile API supports human readable out
|
|||
human readable timing information (e.g. `"time": "391,9ms"`, `"time": "123.3micros"`).
|
||||
|
||||
[[profiling-queries]]
|
||||
=== Profiling Queries
|
||||
==== Profiling Queries
|
||||
|
||||
[NOTE]
|
||||
=======================================
|
||||
|
@ -244,7 +244,7 @@ the `advance` phase of that query is the cause, for example.
|
|||
=======================================
|
||||
|
||||
[[query-section]]
|
||||
==== `query` Section
|
||||
===== `query` Section
|
||||
|
||||
The `query` section contains detailed timing of the query tree executed by Lucene on a particular shard.
|
||||
The overall structure of this query tree will resemble your original Elasticsearch query, but may be slightly
|
||||
|
@ -296,7 +296,7 @@ that in a moment. Finally, the `children` array lists any sub-queries that may
|
|||
values ("search test"), our BooleanQuery holds two children TermQueries. They have identical information (type, time,
|
||||
breakdown, etc). Children are allowed to have their own children.
|
||||
|
||||
===== Timing Breakdown
|
||||
====== Timing Breakdown
|
||||
|
||||
The `breakdown` component lists detailed timing statistics about low-level Lucene execution:
|
||||
|
||||
|
@ -335,7 +335,7 @@ the breakdown is inclusive of all children times.
|
|||
The meaning of the stats are as follows:
|
||||
|
||||
[float]
|
||||
==== All parameters:
|
||||
===== All parameters:
|
||||
|
||||
[horizontal]
|
||||
`create_weight`::
|
||||
|
@ -401,7 +401,7 @@ The meaning of the stats are as follows:
|
|||
how selective queries are, by comparing counts between different query components.
|
||||
|
||||
[[collectors-section]]
|
||||
==== `collectors` Section
|
||||
===== `collectors` Section
|
||||
|
||||
The Collectors portion of the response shows high-level execution details. Lucene works by defining a "Collector"
|
||||
which is responsible for coordinating the traversal, scoring, and collection of matching documents. Collectors
|
||||
|
@ -488,7 +488,7 @@ For reference, the various collector reasons are:
|
|||
|
||||
|
||||
[[rewrite-section]]
|
||||
==== `rewrite` Section
|
||||
===== `rewrite` Section
|
||||
|
||||
All queries in Lucene undergo a "rewriting" process. A query (and its sub-queries) may be rewritten one or
|
||||
more times, and the process continues until the query stops changing. This process allows Lucene to perform
|
||||
|
@ -500,7 +500,7 @@ The rewriting process is complex and difficult to display, since queries can cha
|
|||
showing the intermediate results, the total rewrite time is simply displayed as a value (in nanoseconds). This
|
||||
value is cumulative and contains the total time for all queries being rewritten.
|
||||
|
||||
==== A more complex example
|
||||
===== A more complex example
|
||||
|
||||
|
||||
To demonstrate a slightly more complex query and the associated results, we can profile the following query:
|
||||
|
@ -674,7 +674,7 @@ The Collector tree is fairly straightforward, showing how a single CancellableCo
|
|||
which also wraps a FilteredCollector to execute the post_filter (and in turn wraps the normal scoring SimpleCollector),
|
||||
a BucketCollector to run all scoped aggregations.
|
||||
|
||||
==== Understanding MultiTermQuery output
|
||||
===== Understanding MultiTermQuery output
|
||||
|
||||
A special note needs to be made about the `MultiTermQuery` class of queries. This includes wildcards, regex, and fuzzy
|
||||
queries. These queries emit very verbose responses, and are not overly structured.
|
||||
|
@ -694,10 +694,10 @@ ignore its children if you find the details too tricky to interpret.
|
|||
Hopefully this will be fixed in future iterations, but it is a tricky problem to solve and still in-progress :)
|
||||
|
||||
[[profiling-aggregations]]
|
||||
=== Profiling Aggregations
|
||||
==== Profiling Aggregations
|
||||
|
||||
[[agg-section]]
|
||||
==== `aggregations` Section
|
||||
===== `aggregations` Section
|
||||
|
||||
|
||||
The `aggregations` section contains detailed timing of the aggregation tree executed by a particular shard.
|
||||
|
@ -817,7 +817,7 @@ aggregation then has a child `LongTermsAggregator` which comes from the second t
|
|||
The `time_in_nanos` field shows the time executed by each aggregation, and is inclusive of all children. While the overall time is useful,
|
||||
the `breakdown` field will give detailed stats about how the time was spent.

===== Timing Breakdown
====== Timing Breakdown

The `breakdown` component lists detailed timing statistics about low-level Lucene execution:

@ -845,7 +845,7 @@ the breakdown is inclusive of all children times.

The meaning of the stats is as follows:

[float]
==== All parameters:
===== All parameters:

[horizontal]
`initialise`::

@ -869,9 +869,9 @@ The meaning of the stats are as follows:

means the `collect()` method was called on two different documents.

[[profiling-considerations]]
=== Profiling Considerations
==== Profiling Considerations

==== Performance Notes
===== Performance Notes

Like any profiler, the Profile API introduces a non-negligible overhead to search execution. The act of instrumenting
low-level method calls such as `collect`, `advance`, and `next_doc` can be fairly expensive, since these methods are called

@ -883,7 +883,7 @@ could cause some queries to report larger relative times than their non-profiled

not have a drastic effect compared to other components in the profiled query.

[[profile-limitations]]
==== Limitations
===== Limitations

- Profiling currently does not measure the search fetch phase nor the network overhead
- Profiling also does not account for time spent in the queue, merging shard responses on the coordinating node, or

@ -1,5 +1,5 @@
[[search-rank-eval]]
== Ranking Evaluation API
=== Ranking Evaluation API

experimental["The ranking evaluation API is experimental and may be changed or removed completely in a future release, as well as change in non-backwards compatible ways on minor versions updates. Elastic will take a best effort approach to fix any issues, but experimental features are not subject to the support SLA of official GA features."]

@ -10,7 +10,7 @@ returns typical information retrieval metrics like _mean reciprocal rank_,

_precision_ or _discounted cumulative gain_.

[float]
=== Overview
==== Overview

Search quality evaluation starts with looking at the users of your search application, and the things that they are searching for.
Users have a specific _information need_, e.g. they are looking for a gift in a web shop or want to book a flight for their next holiday.

@ -31,7 +31,7 @@ In order to get started with search quality evaluation, three basic things are n

The ranking evaluation API provides a convenient way to use this information in a ranking evaluation request to calculate different search evaluation metrics. This gives a first estimation of your overall search quality and gives you a measurement to optimize against when fine-tuning various aspects of the query generation in your application.

[float]
=== Ranking evaluation request structure
==== Ranking evaluation request structure

In its most basic form, a request to the `_rank_eval` endpoint has two sections:
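
A minimal sketch of such a request, with hypothetical index name, document IDs, and ratings:

[source,js]
--------------------------------------------------
GET /my_index/_rank_eval
{
  "requests": [
    {
      "id": "amsterdam_query",
      "request": { "query": { "match": { "text": "amsterdam" } } },
      "ratings": [
        { "_index": "my_index", "_id": "doc1", "rating": 0 },
        { "_index": "my_index", "_id": "doc2", "rating": 1 }
      ]
    }
  ],
  "metric": { "precision": { "k": 10 } }
}
--------------------------------------------------
// NOTCONSOLE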

@ -88,7 +88,7 @@ the rating of the documents relevance with regards to this search request

A document `rating` can be any integer value that expresses the relevance of the document on a user-defined scale. For some of the metrics, just giving a binary rating (e.g. `0` for irrelevant and `1` for relevant) will be sufficient; other metrics can use a more fine-grained scale.

[float]
=== Template based ranking evaluation
==== Template based ranking evaluation

As an alternative to having to provide a single query per test request, it is possible to specify query templates in the evaluation request and later refer to them. Queries with similar structure that only differ in their parameters don't have to be repeated all the time in the `requests` section this way. In typical search systems, where user inputs usually get filled into a small set of query templates, this helps make the evaluation request more succinct.
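
A sketch of a templated evaluation request; the template id, field, and parameter names are hypothetical:

[source,js]
--------------------------------------------------
GET /my_index/_rank_eval
{
  "templates": [
    {
      "id": "match_one_field",
      "template": { "inline": { "query": { "match": { "{{field}}": "{{value}}" } } } }
    }
  ],
  "requests": [
    {
      "id": "amsterdam_query",
      "template_id": "match_one_field",
      "params": { "field": "text", "value": "amsterdam" },
      "ratings": [ { "_index": "my_index", "_id": "doc1", "rating": 1 } ]
    }
  ],
  "metric": { "precision": { "k": 10 } }
}
--------------------------------------------------
// NOTCONSOLE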

@ -130,14 +130,14 @@ GET /my_index/_rank_eval

<4> the parameters to use to fill the template

[float]
=== Available evaluation metrics
==== Available evaluation metrics

The `metric` section determines which of the available evaluation metrics is going to be used.
Currently, the following metrics are supported:

[float]
[[k-precision]]
==== Precision at K (P@k)
===== Precision at K (P@k)

This metric measures the number of relevant results in the top k search results. It's a form of the well-known https://en.wikipedia.org/wiki/Information_retrieval#Precision[Precision] metric that only looks at the top k documents. It is the fraction of relevant documents in those first k
search results. A precision at 10 (P@10) value of 0.6 then means six out of the 10 top hits are relevant with respect to the user's information need.
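
As a sketch, the `metric` section for P@k might look like this; the parameter values are illustrative:

[source,js]
--------------------------------------------------
"metric": {
  "precision": {
    "k": 10,
    "relevant_rating_threshold": 1,
    "ignore_unlabeled": false
  }
}
--------------------------------------------------
// NOTCONSOLE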

@ -183,7 +183,7 @@ If set to 'true', unlabeled documents are ignored and neither count as relevant

|=======================================================================

[float]
==== Mean reciprocal rank
===== Mean reciprocal rank

For every query in the test suite, this metric calculates the reciprocal of the rank of the
first relevant document. For example, finding the first relevant result
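
A sketch of the corresponding `metric` section, with illustrative values:

[source,js]
--------------------------------------------------
"metric": {
  "mean_reciprocal_rank": { "k": 10, "relevant_rating_threshold": 1 }
}
--------------------------------------------------
// NOTCONSOLE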

@ -223,7 +223,7 @@ in the query. Defaults to 10.

|=======================================================================

[float]
==== Discounted cumulative gain (DCG)
===== Discounted cumulative gain (DCG)

In contrast to the two metrics above, https://en.wikipedia.org/wiki/Discounted_cumulative_gain[discounted cumulative gain] takes both the rank and the rating of the search results into account.
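
A sketch of the corresponding `metric` section, with illustrative values:

[source,js]
--------------------------------------------------
"metric": {
  "dcg": { "k": 10, "normalize": false }
}
--------------------------------------------------
// NOTCONSOLE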

@ -261,7 +261,7 @@ in the query. Defaults to 10.

|=======================================================================

[float]
==== Expected Reciprocal Rank (ERR)
===== Expected Reciprocal Rank (ERR)

Expected Reciprocal Rank (ERR) is an extension of the classical reciprocal rank for the graded relevance case
(Olivier Chapelle, Donald Metzler, Ya Zhang, and Pierre Grinspan. 2009. http://olivier.chapelle.cc/pub/err.pdf[Expected reciprocal rank for graded relevance].)
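
A sketch of the corresponding `metric` section; `maximum_relevance` names the highest rating used in the ratings scale, and the values here are illustrative:

[source,js]
--------------------------------------------------
"metric": {
  "expected_reciprocal_rank": { "maximum_relevance": 3, "k": 10 }
}
--------------------------------------------------
// NOTCONSOLE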

@ -311,7 +311,7 @@ in the query. Defaults to 10.

|=======================================================================

[float]
=== Response format
==== Response format

The response of the `_rank_eval` endpoint contains the overall calculated result for the defined quality metric,
a `details` section with a breakdown of results for each query in the test suite and an optional `failures` section

@ -1,5 +1,5 @@
[[search-request-body]]
== Request Body Search
=== Request Body Search

The search request can be executed with a search DSL, which includes the
<<query-dsl,Query DSL>>, within its body. Here is an

@ -56,7 +56,7 @@ And here is a sample response:

// TESTRESPONSE[s/"took": 1/"took": $body.took/]

[float]
=== Parameters
==== Parameters

[horizontal]
`timeout`::

@ -129,7 +129,7 @@ Both HTTP GET and HTTP POST can be used to execute search with body. Since not

all clients support GET with body, POST is allowed as well.

[float]
=== Fast check for any matching docs
==== Fast check for any matching docs

NOTE: `terminate_after` is always applied **after** the `post_filter` and stops
the query as well as the aggregation executions when enough hits have been

@ -1,5 +1,5 @@
[[request-body-search-collapse]]
=== Field Collapsing
==== Field Collapsing

Allows collapsing search results based on field values.
The collapsing is done by selecting only the top sorted document per collapse key.
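
A minimal sketch of a collapsed search; the `twitter` index and the `message`, `user`, and `likes` fields are hypothetical:

[source,js]
--------------------------------------------------
GET /twitter/_search
{
  "query": { "match": { "message": "elasticsearch" } },
  "collapse": { "field": "user" },
  "sort": [ "likes" ]
}
--------------------------------------------------
// NOTCONSOLE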

@ -35,7 +35,7 @@ The field used for collapsing must be a single valued <<keyword, `keyword`>> or

NOTE: The collapsing is applied to the top hits only and does not affect aggregations.

==== Expand collapse results
===== Expand collapse results

It is also possible to expand each collapsed top hit with the `inner_hits` option.

@ -117,7 +117,7 @@ The default is based on the number of data nodes and the default search thread p

WARNING: `collapse` cannot be used in conjunction with <<request-body-search-scroll, scroll>>,
<<request-body-search-rescore, rescore>> or <<request-body-search-search-after, search after>>.

==== Second level of collapsing
===== Second level of collapsing

Second level of collapsing is also supported and is applied to `inner_hits`.
For example, the following request finds the top scored tweets for

@ -1,5 +1,5 @@
[[request-body-search-docvalue-fields]]
=== Doc value Fields
==== Doc value Fields

Allows returning the <<doc-values,doc value>> representation of a field for each hit, for
example:
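
A sketch of such a request; the field names are hypothetical, and the object form with `format` applies a custom rendering to that field:

[source,js]
--------------------------------------------------
GET /_search
{
  "query": { "match_all": {} },
  "docvalue_fields": [
    "my_ip_field",
    { "field": "my_date_field", "format": "epoch_millis" }
  ]
}
--------------------------------------------------
// NOTCONSOLE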

@ -55,7 +55,7 @@ Note that if the fields parameter specifies fields without docvalues it will try

causing the terms for that field to be loaded to memory (cached), which will result in more memory consumption.

[float]
==== Custom formats
===== Custom formats

While most fields do not support custom formats, some of them do:

@ -1,5 +1,5 @@
[[request-body-search-explain]]
=== Explain
==== Explain

Enables explanation for each hit on how its score was computed.

@ -1,5 +1,5 @@
[[request-body-search-from-size]]
=== From / Size
==== From / Size

Pagination of results can be done by using the `from` and `size`
parameters. The `from` parameter defines the offset from the first
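
A minimal pagination sketch, requesting the second page of ten hits; the `user` field is hypothetical:

[source,js]
--------------------------------------------------
GET /_search
{
  "from": 10,
  "size": 10,
  "query": { "term": { "user": "kimchy" } }
}
--------------------------------------------------
// NOTCONSOLE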

@ -1,5 +1,5 @@
[[request-body-search-highlighting]]
=== Highlighting
==== Highlighting

Highlighters enable you to get highlighted snippets from one or more fields
in your search results so you can show users where the query matches are.

@ -42,7 +42,7 @@ highlighter). You can specify the highlighter `type` you want to use

for each field.

[[unified-highlighter]]
==== Unified highlighter
===== Unified highlighter
The `unified` highlighter uses the Lucene Unified Highlighter. This
highlighter breaks the text into sentences and uses the BM25 algorithm to score
individual sentences as if they were documents in the corpus. It also supports

@ -50,7 +50,7 @@ accurate phrase and multi-term (fuzzy, prefix, regex) highlighting. This is the

default highlighter.

[[plain-highlighter]]
==== Plain highlighter
===== Plain highlighter
The `plain` highlighter uses the standard Lucene highlighter. It attempts to
reflect the query matching logic in terms of understanding word importance and
any word positioning criteria in phrase queries.

@ -65,7 +65,7 @@ If you want to highlight a lot of fields in a lot of documents with complex

queries, we recommend using the `unified` highlighter on `postings` or `term_vector` fields.

[[fast-vector-highlighter]]
==== Fast vector highlighter
===== Fast vector highlighter
The `fvh` highlighter uses the Lucene Fast Vector highlighter.
This highlighter can be used on fields with `term_vector` set to
`with_positions_offsets` in the mapping. The fast vector highlighter:

@ -84,7 +84,7 @@ The `fvh` highlighter does not support span queries. If you need support for

span queries, try an alternative highlighter, such as the `unified` highlighter.

[[offsets-strategy]]
==== Offsets Strategy
===== Offsets Strategy
To create meaningful search snippets from the terms being queried,
the highlighter needs to know the start and end character offsets of each word
in the original text. These offsets can be obtained from:

@ -117,7 +117,7 @@ limited to 1000000. This default limit can be changed

for a particular index with the index setting `index.highlight.max_analyzed_offset`.

[[highlighting-settings]]
==== Highlighting Settings
===== Highlighting Settings

Highlighting settings can be set on a global level and overridden at
the field level.

@ -255,7 +255,7 @@ type:: The highlighter to use: `unified`, `plain`, or `fvh`. Defaults to

`unified`.

[[highlighting-examples]]
==== Highlighting Examples
===== Highlighting Examples

* <<override-global-settings, Override global settings>>
* <<specify-highlight-query, Specify a highlight query>>

@ -271,7 +271,7 @@ type:: The highlighter to use: `unified`, `plain`, or `fvh`. Defaults to

[[override-global-settings]]
[float]
=== Override global settings
==== Override global settings

You can specify highlighter settings globally and selectively override them for
individual fields.

@ -300,7 +300,7 @@ GET /_search

[float]
[[specify-highlight-query]]
=== Specify a highlight query
==== Specify a highlight query

You can specify a `highlight_query` to take additional information into account
when highlighting. For example, the following query includes both the search

@ -370,7 +370,7 @@ GET /_search

[float]
[[set-highlighter-type]]
=== Set highlighter type
==== Set highlighter type

The `type` field allows you to force a specific highlighter type.
The allowed values are: `unified`, `plain` and `fvh`.

@ -395,7 +395,7 @@ GET /_search

[[configure-tags]]
[float]
=== Configure highlighting tags
==== Configure highlighting tags

By default, the highlighting will wrap highlighted text in `<em>` and
`</em>`. This can be controlled by setting `pre_tags` and `post_tags`,
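
A sketch of custom tags; the `content` field and the tag strings are illustrative:

[source,js]
--------------------------------------------------
GET /_search
{
  "query": { "match": { "content": "kimchy" } },
  "highlight": {
    "pre_tags": [ "<tag1>" ],
    "post_tags": [ "</tag1>" ],
    "fields": { "content": {} }
  }
}
--------------------------------------------------
// NOTCONSOLE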

@ -464,7 +464,7 @@ GET /_search

[float]
[[highlight-source]]
=== Highlight on source
==== Highlight on source

Forces the highlighting to highlight fields based on the source even if fields
are stored separately. Defaults to `false`.

@ -489,7 +489,7 @@ GET /_search

[[highlight-all]]
[float]
=== Highlight in all fields
==== Highlight in all fields

By default, only fields that contain a query match are highlighted. Set
`require_field_match` to `false` to highlight all fields.

@ -514,7 +514,7 @@ GET /_search

[[matched-fields]]
[float]
=== Combine matches on multiple fields
==== Combine matches on multiple fields

WARNING: This is only supported by the `fvh` highlighter

@ -651,7 +651,7 @@ to

[[explicit-field-order]]
[float]
=== Explicitly order highlighted fields
==== Explicitly order highlighted fields
Elasticsearch highlights the fields in the order that they are sent, but per the
JSON spec, objects are unordered. If you need to be explicit about the order
in which fields are highlighted, specify the `fields` as an array:

@ -679,7 +679,7 @@ fields are highlighted but a plugin might.

[float]
[[control-highlighted-frags]]
=== Control highlighted fragments
==== Control highlighted fragments

Each highlighted field can control the size of the highlighted fragment
in characters (defaults to `100`), and the maximum number of fragments

@ -780,7 +780,7 @@ GET /_search

[float]
[[highlight-postings-list]]
=== Highlight using the postings list
==== Highlight using the postings list

Here is an example of setting the `comment` field in the index mapping to
allow for highlighting using the postings:

@ -822,7 +822,7 @@ PUT /example

[float]
[[specify-fragmenter]]
=== Specify a fragmenter for the plain highlighter
==== Specify a fragmenter for the plain highlighter

When using the `plain` highlighter, you can choose between the `simple` and
`span` fragmenters:

@ -1,5 +1,5 @@
[[request-body-search-index-boost]]
=== Index Boost
==== Index Boost

Allows configuring different boost levels per index when searching
across more than one index. This is very handy when hits coming from

@ -1,5 +1,5 @@
[[request-body-search-inner-hits]]
=== Inner hits
==== Inner hits

The <<parent-join, parent-join>> and <<nested, nested>> features allow the return of documents that
have matches in a different scope. In the parent/child case, parent documents are returned based on matches in child

@ -55,7 +55,7 @@ If `inner_hits` is defined on a query that supports it then each search hit will

--------------------------------------------------
// NOTCONSOLE

==== Options
===== Options

Inner hits support the following options:

@ -79,7 +79,7 @@ Inner hits also supports the following per document features:

* <<request-body-search-seq-no-primary-term,Include Sequence Numbers and Primary Terms>>

[[nested-inner-hits]]
==== Nested inner hits
===== Nested inner hits

The nested `inner_hits` can be used to include nested inner objects as inner hits to a search hit.

@ -198,7 +198,7 @@ So in the above example only the comment part is returned per nested hit and not

document that contained the comment.

[[nested-inner-hits-source]]
==== Nested inner hits and +_source+
===== Nested inner hits and +_source+

Nested documents don't have a `_source` field, because the entire source of the document is stored with the root document under
its `_source` field. To include the source of just the nested document, the source of the root document is parsed and just

@ -315,7 +315,7 @@ Response not included in text but tested for completeness sake.

////

[[hierarchical-nested-inner-hits]]
==== Hierarchical levels of nested object fields and inner hits.
===== Hierarchical levels of nested object fields and inner hits.

If a mapping has multiple levels of hierarchical nested object fields, each level can be accessed via a dot-notated path.
For example, if there is a `comments` nested field that contains a `votes` nested field and votes should directly be returned

@ -437,7 +437,7 @@ Which would look like:

This indirect referencing is only supported for nested inner hits.

[[parent-child-inner-hits]]
==== Parent/child inner hits
===== Parent/child inner hits

The parent/child `inner_hits` can be used to include parent or child:

@ -1,5 +1,5 @@
[[request-body-search-min-score]]
=== min_score
==== min_score

Exclude documents which have a `_score` less than the minimum specified
in `min_score`:
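
A minimal sketch; the threshold value and the `user` field are illustrative:

[source,js]
--------------------------------------------------
GET /_search
{
  "min_score": 0.5,
  "query": { "term": { "user": "kimchy" } }
}
--------------------------------------------------
// NOTCONSOLE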

@ -1,5 +1,5 @@
[[request-body-search-queries-and-filters]]
=== Named Queries
==== Named Queries

Each filter and query can accept a `_name` in its top level definition.
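
A sketch of two named clauses inside a `bool` query; the field names and values are hypothetical. Matching hits then report which named clauses they matched in a `matched_queries` field:

[source,js]
--------------------------------------------------
GET /_search
{
  "query": {
    "bool": {
      "should": [
        { "match": { "name.first": { "query": "shay", "_name": "first" } } },
        { "match": { "name.last": { "query": "banon", "_name": "last" } } }
      ]
    }
  }
}
--------------------------------------------------
// NOTCONSOLE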

@ -1,5 +1,5 @@
[[request-body-search-post-filter]]
=== Post filter
==== Post filter

The `post_filter` is applied to the search `hits` at the very end of a search
request, after aggregations have already been calculated. Its purpose is
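
A sketch of the typical faceted-navigation pattern, assuming a hypothetical `shirts` index: the `colors` aggregation is computed over all Gucci shirts, while the `post_filter` narrows the returned hits to red ones only:

[source,js]
--------------------------------------------------
GET /shirts/_search
{
  "query": { "bool": { "filter": { "term": { "brand": "gucci" } } } },
  "aggs": { "colors": { "terms": { "field": "color" } } },
  "post_filter": { "term": { "color": "red" } }
}
--------------------------------------------------
// NOTCONSOLE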

@ -1,5 +1,5 @@
[[request-body-search-preference]]
=== Preference
==== Preference

Controls a `preference` of the shard copies on which to execute the search. By
default, Elasticsearch selects from the available shard copies in an

@ -1,5 +1,5 @@
[[request-body-search-query]]
=== Query
==== Query

The query element within the search request body allows defining a
query using the <<query-dsl,Query DSL>>.

@ -1,5 +1,5 @@
[[request-body-search-rescore]]
=== Rescoring
==== Rescoring

Rescoring can help to improve precision by reordering just the top (e.g.
100 - 500) documents returned by the

@ -23,7 +23,7 @@ NOTE: when exposing pagination to your users, you should not change

`from` values) since that can alter the top hits, causing results to
confusingly shift as the user steps through pages.

==== Query rescorer
===== Query rescorer

The query rescorer executes a second query only on the Top-K results
returned by the <<request-body-search-query,`query`>> and
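
A sketch of a query rescorer: a cheap `match` query retrieves candidates, and a more expensive `match_phrase` query reorders the top 50 of them. The `message` field, window size, and weights are illustrative:

[source,js]
--------------------------------------------------
GET /_search
{
  "query": {
    "match": { "message": { "operator": "or", "query": "the quick brown" } }
  },
  "rescore": {
    "window_size": 50,
    "query": {
      "rescore_query": {
        "match_phrase": { "message": { "query": "the quick brown", "slop": 2 } }
      },
      "query_weight": 0.7,
      "rescore_query_weight": 1.2
    }
  }
}
--------------------------------------------------
// NOTCONSOLE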

@ -83,7 +83,7 @@ for <<query-dsl-function-score-query,`function query`>> rescores.

|`min` |Take the min of the original score and the rescore query score.
|=======================================================================

==== Multiple Rescores
===== Multiple Rescores

It is also possible to execute multiple rescores in sequence: