[DOCS] Swap [float] for [discrete] (#60134)

Changes instances of `[float]` in our docs to `[discrete]`.

Asciidoctor prefers the `[discrete]` tag for floating headings:
https://asciidoctor.org/docs/asciidoc-asciidoctor-diffs/#blocks
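
For example, a floating heading that was previously written as (heading text
illustrative; only the style attribute changes):

----
[float]
=== Example heading
----

now becomes:

----
[discrete]
=== Example heading
----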
James Rodewig 2020-07-23 12:42:33 -04:00 committed by GitHub
parent 716a3d5a21
commit 988e8c8fc6
283 changed files with 1371 additions and 1371 deletions


@ -8,7 +8,7 @@ Using an <<java-admin-indices,`IndicesAdminClient`>>, you can create an index wi
client.admin().indices().prepareCreate("twitter").get();
--------------------------------------------------
[float]
[discrete]
[[java-admin-indices-create-index-settings]]
===== Index Settings


@ -111,7 +111,7 @@ specifying a `pipeline` like this:
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-pipeline]
--------------------------------------------------
[float]
[discrete]
[[java-docs-update-by-query-task-api]]
=== Works with the Task API
@ -130,7 +130,7 @@ With the `TaskId` shown above you can look up the task directly:
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-get-task]
--------------------------------------------------
[float]
[discrete]
[[java-docs-update-by-query-cancel-task-api]]
=== Works with the Cancel Task API
@ -146,7 +146,7 @@ Use the `list tasks` API to find the value of `taskId`.
Cancelling a request is typically a very fast process but can take up to a few seconds.
The task status API continues to list the task until the cancellation is complete.
[float]
[discrete]
[[java-docs-update-by-query-rethrottle]]
=== Rethrottling


@ -31,7 +31,7 @@ PUT hockey/_bulk?refresh
----------------------------------------------------------------
// TESTSETUP
[float]
[discrete]
==== Accessing Doc Values from Painless
Document values can be accessed from a `Map` named `doc`.
@ -111,7 +111,7 @@ GET hockey/_search
----------------------------------------------------------------
[float]
[discrete]
==== Missing values
`doc['field'].value` throws an exception if
@ -121,7 +121,7 @@ To check if a document is missing a value, you can call
`doc['field'].size() == 0`.
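
A minimal sketch of that null-safe pattern in a script (the `goals` field
name is assumed for illustration):

[source,painless]
----
// Assumed numeric field 'goals'; return 0 when the document has no value.
doc['goals'].size() == 0 ? 0 : doc['goals'].value
----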
[float]
[discrete]
==== Updating Fields with Painless
You can also easily update fields. You access the original source for a field as `ctx._source.<field-name>`.
@ -177,7 +177,7 @@ POST hockey/_update/1
}
----------------------------------------------------------------
[float]
[discrete]
[[modules-scripting-painless-dates]]
==== Dates
@ -202,7 +202,7 @@ GET hockey/_search
}
----------------------------------------------------------------
[float]
[discrete]
[[modules-scripting-painless-regex]]
==== Regular expressions


@ -3,7 +3,7 @@
Alerting plugins allow Elasticsearch to monitor indices and to trigger alerts when thresholds are breached.
[float]
[discrete]
=== Core alerting plugins
The core alerting plugins are:


@ -73,7 +73,7 @@ is often beneficial to use separate fields for analysis with and without phoneti
That way searches can be run against both fields with differing boosts and trade-offs (e.g.
only run a fuzzy `match` query on the original text field, but not on the phonetic version).
[float]
[discrete]
===== Double metaphone settings
If the `double_metaphone` encoder is used, then this additional setting is
@ -83,7 +83,7 @@ supported:
The maximum length of the emitted metaphone token. Defaults to `4`.
[float]
[discrete]
===== Beider Morse settings
If the `beider_morse` encoder is used, then these additional settings are


@ -14,7 +14,7 @@ include::install_remove.asciidoc[]
[[analysis-smartcn-tokenizer]]
[float]
[discrete]
==== `smartcn` tokenizer and token filter
The plugin provides the `smartcn` analyzer, `smartcn_tokenizer` tokenizer, and


@ -11,7 +11,7 @@ http://www.egothor.org/[Egothor project].
include::install_remove.asciidoc[]
[[analysis-stempel-tokenizer]]
[float]
[discrete]
==== `stempel` tokenizer and token filters
The plugin provides the `polish` analyzer and the `polish_stem` and `polish_stop` token filters,


@ -9,7 +9,7 @@ It provides stemming for Ukrainian using the http://github.com/morfologik/morfol
include::install_remove.asciidoc[]
[[analysis-ukrainian-analyzer]]
[float]
[discrete]
==== `ukrainian` analyzer
The plugin provides the `ukrainian` analyzer.


@ -4,7 +4,7 @@
Analysis plugins extend Elasticsearch by adding new analyzers, tokenizers,
token filters, or character filters to Elasticsearch.
[float]
[discrete]
==== Core analysis plugins
The core analysis plugins are:
@ -44,7 +44,7 @@ Provides high quality stemming for Polish.
Provides stemming for Ukrainian.
[float]
[discrete]
==== Community contributed analysis plugins
A number of analysis plugins have been contributed by our community:


@ -3,7 +3,7 @@
API extension plugins add new functionality to Elasticsearch by adding new APIs or features, usually to do with search or mapping.
[float]
[discrete]
=== Community contributed API extension plugins
A number of plugins have been contributed by our community:


@ -18,7 +18,7 @@ These examples provide the bare bones needed to get started. For more
information about how to write a plugin, we recommend looking at the plugins
listed in this documentation for inspiration.
[float]
[discrete]
=== Plugin descriptor file
All plugins must contain a file called `plugin-descriptor.properties`.
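
As a rough sketch, the mandatory entries in that file look like the following
(all values here are hypothetical):

[source,properties]
----
# Hypothetical example values; fill in real ones for your plugin.
description=An example plugin
version=1.0.0
name=example-plugin
classname=org.example.ExamplePlugin
java.version=1.8
elasticsearch.version=7.9.0
----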
@ -32,7 +32,7 @@ include::{plugin-properties-files}/plugin-descriptor.properties[]
Either fill in this template yourself or, if you are using Elasticsearch's Gradle build system, you
can fill in the necessary values in the `build.gradle` file for your plugin.
[float]
[discrete]
==== Mandatory elements for plugins
@ -70,7 +70,7 @@ in the presence of plugins with the incorrect `elasticsearch.version`.
==============================================
[float]
[discrete]
=== Testing your plugin
When testing a Java plugin, it will only be auto-loaded if it is in the
@ -81,7 +81,7 @@ You may also load your plugin within the test framework for integration tests.
Read more in {ref}/integration-tests.html#changing-node-configuration[Changing Node Configuration].
[float]
[discrete]
[[plugin-authors-jsm]]
=== Java Security permissions


@ -5,7 +5,7 @@ Discovery plugins extend Elasticsearch by adding new seed hosts providers that
can be used to extend the {ref}/modules-discovery.html[cluster formation
module].
[float]
[discrete]
==== Core discovery plugins
The core discovery plugins are:
@ -25,7 +25,7 @@ addresses of seed hosts.
The Google Compute Engine discovery plugin uses the GCE API to identify the
addresses of seed hosts.
[float]
[discrete]
==== Community contributed discovery plugins
The following discovery plugins have been contributed by our community:


@ -3,7 +3,7 @@
The ingest plugins extend Elasticsearch by providing additional ingest node capabilities.
[float]
[discrete]
=== Core Ingest Plugins
The core ingest plugins are:
@ -29,7 +29,7 @@ A processor that extracts details from the User-Agent header value. The
distributed by default with Elasticsearch. See
{ref}/user-agent-processor.html[User Agent processor] for more details.
[float]
[discrete]
=== Community contributed ingest plugins
The following plugin has been contributed by our community:


@ -1,4 +1,4 @@
[float]
[discrete]
[id="{plugin_name}-install"]
==== Installation
@ -25,7 +25,7 @@ This plugin can be downloaded for <<plugin-management-custom-url,offline install
endif::[]
[float]
[discrete]
[id="{plugin_name}-remove"]
==== Removal


@ -4,11 +4,11 @@
Integrations are not plugins, but are external tools or modules that make it easier to work with Elasticsearch.
[float]
[discrete]
[[cms-integrations]]
=== CMS integrations
[float]
[discrete]
==== Supported by the community:
* http://drupal.org/project/search_api_elasticsearch[Drupal]:
@ -31,14 +31,14 @@ Integrations are not plugins, but are external tools or modules that make it eas
* http://extensions.xwiki.org/xwiki/bin/view/Extension/Elastic+Search+Macro/[XWiki Next Generation Wiki]:
XWiki has an Elasticsearch and Kibana macro that allows you to run Elasticsearch queries, display the results in XWiki pages using XWiki's scripting language, and include Kibana Widgets in XWiki pages
[float]
[discrete]
[[data-integrations]]
=== Data import/export and validation
NOTE: Rivers were used to import data from external systems into Elasticsearch prior to the 2.0 release. Elasticsearch
releases 2.0 and later do not support rivers.
[float]
[discrete]
==== Supported by Elasticsearch:
* {logstash-ref}/plugins-outputs-elasticsearch.html[Logstash output to Elasticsearch]:
@ -50,7 +50,7 @@ releases 2.0 and later do not support rivers.
* {logstash-ref}/plugins-codecs-es_bulk.html[Elasticsearch bulk codec]
The Logstash `es_bulk` plugin decodes the Elasticsearch bulk format into individual events.
[float]
[discrete]
==== Supported by the community:
* https://github.com/jprante/elasticsearch-jdbc[JDBC importer]:
@ -75,11 +75,11 @@ releases 2.0 and later do not support rivers.
* https://github.com/senacor/elasticsearch-evolution[Elasticsearch Evolution]:
A library to migrate elasticsearch mappings.
[float]
[discrete]
[[deployment]]
=== Deployment
[float]
[discrete]
==== Supported by Elasticsearch:
* https://github.com/elastic/ansible-elasticsearch[Ansible playbook for Elasticsearch]:
@ -88,17 +88,17 @@ releases 2.0 and later do not support rivers.
* https://github.com/elastic/puppet-elasticsearch[Puppet]:
Elasticsearch puppet module.
[float]
[discrete]
==== Supported by the community:
* https://github.com/elastic/cookbook-elasticsearch[Chef]:
Chef cookbook for Elasticsearch
[float]
[discrete]
[[framework-integrations]]
=== Framework integrations
[float]
[discrete]
==== Supported by the community:
* http://www.searchtechnologies.com/aspire-for-elasticsearch[Aspire for Elasticsearch]:
@ -157,29 +157,29 @@ releases 2.0 and later do not support rivers.
* https://micrometer.io[Micrometer]:
Vendor-neutral application metrics facade. Think SLF4j, but for metrics.
[float]
[discrete]
[[hadoop-integrations]]
=== Hadoop integrations
[float]
[discrete]
==== Supported by Elasticsearch:
* link:/guide/en/elasticsearch/hadoop/current/[es-hadoop]: Elasticsearch real-time
search and analytics natively integrated with Hadoop. Supports Map/Reduce,
Cascading, Apache Hive, Apache Pig, Apache Spark and Apache Storm.
[float]
[discrete]
==== Supported by the community:
* https://github.com/criteo/garmadon[Garmadon]:
Garmadon is a solution for Hadoop Cluster realtime introspection.
[float]
[discrete]
[[monitoring-integrations]]
=== Health and Performance Monitoring
[float]
[discrete]
==== Supported by the community:
* https://github.com/radu-gheorghe/check-es[check-es]:
@ -193,10 +193,10 @@ releases 2.0 and later do not support rivers.
and receive events information.
[[other-integrations]]
[float]
[discrete]
=== Other integrations
[float]
[discrete]
==== Supported by the community:
* https://www.wireshark.org/[Wireshark]:


@ -3,7 +3,7 @@
Management plugins offer UIs for managing and interacting with Elasticsearch.
[float]
[discrete]
=== Core management plugins
The core management plugins are:


@ -3,7 +3,7 @@
Mapper plugins allow new field data types to be added to Elasticsearch.
[float]
[discrete]
=== Core mapper plugins
The core mapper plugins are:


@ -33,7 +33,7 @@ The documentation for each plugin usually includes specific installation
instructions for that plugin, but below we document the various available
options:
[float]
[discrete]
=== Core Elasticsearch plugins
Core Elasticsearch plugins can be installed as follows:
@ -149,7 +149,7 @@ For safety reasons, a node will not start if it is missing a mandatory plugin.
[[listing-removing-updating]]
=== Listing, Removing and Updating Installed Plugins
[float]
[discrete]
=== Listing plugins
A list of the currently loaded plugins can be retrieved with the `list` option:
@ -162,7 +162,7 @@ sudo bin/elasticsearch-plugin list
Alternatively, use the {ref}/cluster-nodes-info.html[node-info API] to find
out which plugins are installed on each node in the cluster.
[float]
[discrete]
=== Removing plugins
Plugins can be removed manually, by deleting the appropriate directory under
@ -182,7 +182,7 @@ purge the configuration files while removing a plugin, use `-p` or `--purge`.
This option can be used after a plugin is removed to remove any lingering
configuration files.
[float]
[discrete]
=== Updating plugins
Plugins are built for a specific version of Elasticsearch, and therefore must be reinstalled
@ -198,7 +198,7 @@ sudo bin/elasticsearch-plugin install [pluginname]
The `plugin` script supports a number of other command line parameters:
[float]
[discrete]
=== Silent/Verbose mode
The `--verbose` parameter outputs more debug information, while the `--silent`
@ -211,7 +211,7 @@ return the following exit codes:
`74`:: IO error
`70`:: any other error
[float]
[discrete]
=== Batch mode
Certain plugins require more privileges than those provided by default in core
@ -229,7 +229,7 @@ mode can be forced by specifying `-b` or `--batch` as follows:
sudo bin/elasticsearch-plugin install --batch [pluginname]
-----------------------------------
[float]
[discrete]
=== Custom config directory
If your `elasticsearch.yml` config file is in a custom location, you will need
@ -241,7 +241,7 @@ can do this as follows:
sudo ES_PATH_CONF=/path/to/conf/dir bin/elasticsearch-plugin install <plugin name>
---------------------
[float]
[discrete]
=== Proxy settings
To install a plugin via a proxy, you can add the proxy details to the


@ -78,7 +78,7 @@ include::repository-shared-settings.asciidoc[]
link:repository-hdfs-security-runtime[Creating the Secure Repository]).
[[repository-hdfs-availability]]
[float]
[discrete]
===== A Note on HDFS Availability
When you initialize a repository, its settings are persisted in the cluster state. When a node comes online, it will
attempt to initialize all repositories for which it has settings. If your cluster has an HDFS repository configured, then
@ -106,7 +106,7 @@ methods are supported by the plugin:
<<repository-hdfs-security-runtime>> for more info)
[[repository-hdfs-security-keytabs]]
[float]
[discrete]
===== Principals and Keytabs
Before attempting to connect to a secured HDFS cluster, provision the Kerberos principals and keytabs that the
Elasticsearch nodes will use for authenticating to Kerberos. For maximum security and to avoid tripping up the Kerberos
@ -137,7 +137,7 @@ host!
// Setup at runtime (principal name)
[[repository-hdfs-security-runtime]]
[float]
[discrete]
===== Creating the Secure Repository
Once your keytab files are in place and your cluster is started, creating a secured HDFS repository is simple. Just
add the name of the principal that you will be authenticating as in the repository settings under the
@ -175,7 +175,7 @@ PUT _snapshot/my_hdfs_repository
// TEST[skip:we don't have hdfs set up while testing this]
[[repository-hdfs-security-authorization]]
[float]
[discrete]
===== Authorization
Once Elasticsearch is connected and authenticated to HDFS, HDFS will infer a username to use for
authorizing file access for the client. By default, it picks this username from the primary part of


@ -200,7 +200,7 @@ pattern then you should set this setting to `true` when upgrading.
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.html#setSignerOverride-java.lang.String-[AWS
Java SDK documentation] for details. Defaults to an empty string, which means that no signing algorithm override will be used.
[float]
[discrete]
[[repository-s3-compatible-services]]
===== S3-compatible services
@ -435,7 +435,7 @@ The bucket needs to exist to register a repository for snapshots. If you did not
create the bucket then the repository registration will fail.
[[repository-s3-aws-vpc]]
[float]
[discrete]
==== AWS VPC Bandwidth Settings
AWS instances resolve S3 endpoints to a public IP. If the Elasticsearch


@ -5,7 +5,7 @@ Repository plugins extend the {ref}/modules-snapshots.html[Snapshot/Restore]
functionality in Elasticsearch by adding repositories backed by the cloud or
by distributed file systems:
[float]
[discrete]
==== Core repository plugins
The core repository plugins are:
@ -27,7 +27,7 @@ The Hadoop HDFS Repository plugin adds support for using HDFS as a repository.
The GCS repository plugin adds support for using Google Cloud Storage service as a repository.
[float]
[discrete]
=== Community contributed repository plugins
The following plugin has been contributed by our community:


@ -3,7 +3,7 @@
Security plugins add a security layer to Elasticsearch.
[float]
[discrete]
=== Core security plugins
The core security plugins are:
@ -15,7 +15,7 @@ enterprise-grade security to their Elastic Stack. Designed to address the
growing security needs of thousands of enterprises using the Elastic Stack
today, X-Pack provides peace of mind when it comes to protecting your data.
[float]
[discrete]
=== Community contributed security plugins
The following plugins have been contributed by our community:


@ -3,7 +3,7 @@
Store plugins offer alternatives to default Lucene stores.
[float]
[discrete]
=== Core store plugins
The core store plugins are:


@ -44,7 +44,7 @@ NOTE: Aggregations operate on the `double` representation of
the data. As a consequence, the result may be approximate when running on longs
whose absolute value is greater than `2^53`.
[float]
[discrete]
== Structuring Aggregations
The following snippet captures the basic structure of aggregations:
@ -76,7 +76,7 @@ sub-aggregations you define on the bucketing aggregation level will be computed
bucketing aggregation. For example, if you define a set of aggregations under the `range` aggregation, the
sub-aggregations will be computed for the range buckets that are defined.
[float]
[discrete]
=== Values Source
Some aggregations work on values extracted from the aggregated documents. Typically, the values will be extracted from


@ -26,7 +26,7 @@ NOTE: Because pipeline aggregations only add to the output, when chaining pipeli
will be included in the final output.
[[buckets-path-syntax]]
[float]
[discrete]
=== `buckets_path` Syntax
Most pipeline aggregations require another aggregation as their input. The input aggregation is defined via the `buckets_path`
@ -158,7 +158,7 @@ POST /_search
<1> `buckets_path` selects the hats and bags buckets (via `['hat']`/`['bag']`) to use in the script specifically,
instead of fetching all the buckets from `sale_type` aggregation
[float]
[discrete]
=== Special Paths
Instead of pathing to a metric, `buckets_path` can use a special `"_count"` path. This instructs
@ -229,7 +229,7 @@ POST /sales/_search
for the `categories` aggregation
[[dots-in-agg-names]]
[float]
[discrete]
=== Dealing with dots in agg names
An alternate syntax is supported to cope with aggregations or metrics which
@ -244,7 +244,7 @@ may be referred to as:
// NOTCONSOLE
[[gap-policy]]
[float]
[discrete]
=== Dealing with gaps in the data
Data in the real world is often noisy and sometimes contains *gaps* -- places where data simply doesn't exist. This can


@ -11,7 +11,7 @@ _Text analysis_ is the process of converting unstructured text, like
the body of an email or a product description, into a structured format that's
optimized for search.
[float]
[discrete]
[[when-to-configure-analysis]]
=== When to configure text analysis
@ -29,7 +29,7 @@ analysis configuration if you're using {es} to:
* Fine-tune search for a specific language
* Perform lexicographic or linguistic research
[float]
[discrete]
[[analysis-toc]]
=== In this section


@ -45,7 +45,7 @@ Elasticsearch provides many language-specific analyzers like `english` or
The `fingerprint` analyzer is a specialist analyzer which creates a
fingerprint which can be used for duplicate detection.
[float]
[discrete]
=== Custom analyzers
If you do not find an analyzer suitable for your needs, you can create a


@ -8,7 +8,7 @@ When the built-in analyzers do not fulfill your needs, you can create a
* a <<analysis-tokenizers,tokenizer>>
* zero or more <<analysis-tokenfilters,token filters>>.
[float]
[discrete]
=== Configuration
The `custom` analyzer accepts the following parameters:
@ -36,7 +36,7 @@ The `custom` analyzer accepts the following parameters:
ensure that a phrase query doesn't match two terms from different array
elements. Defaults to `100`. See <<position-increment-gap>> for more.
[float]
[discrete]
=== Example configuration
Here is an example that combines the following:


@ -12,7 +12,7 @@ Input text is lowercased, normalized to remove extended characters, sorted,
deduplicated and concatenated into a single token. If a stopword list is
configured, stop words will also be removed.
[float]
[discrete]
=== Example output
[source,console]
@ -51,7 +51,7 @@ The above sentence would produce the following single term:
[ and consistent godel is said sentence this yes ]
---------------------------
[float]
[discrete]
=== Configuration
The `fingerprint` analyzer accepts the following parameters:
@ -79,7 +79,7 @@ See the <<analysis-stop-tokenfilter,Stop Token Filter>> for more information
about stop word configuration.
[float]
[discrete]
=== Example configuration
In this example, we configure the `fingerprint` analyzer to use the
@ -135,7 +135,7 @@ The above example produces the following term:
[ consistent godel said sentence yes ]
---------------------------
[float]
[discrete]
=== Definition
The `fingerprint` tokenizer consists of:


@ -7,7 +7,7 @@
The `keyword` analyzer is a ``noop'' analyzer which returns the entire input
string as a single token.
[float]
[discrete]
=== Example output
[source,console]
@ -46,12 +46,12 @@ The above sentence would produce the following single term:
[ The 2 QUICK Brown-Foxes jumped over the lazy dog's bone. ]
---------------------------
[float]
[discrete]
=== Configuration
The `keyword` analyzer is not configurable.
[float]
[discrete]
=== Definition
The `keyword` analyzer consists of:


@ -22,7 +22,7 @@ Read more about http://www.regular-expressions.info/catastrophic.html[pathologic
========================================
[float]
[discrete]
=== Example output
[source,console]
@ -138,7 +138,7 @@ The above sentence would produce the following terms:
[ the, 2, quick, brown, foxes, jumped, over, the, lazy, dog, s, bone ]
---------------------------
[float]
[discrete]
=== Configuration
The `pattern` analyzer accepts the following parameters:
@ -170,7 +170,7 @@ See the <<analysis-stop-tokenfilter,Stop Token Filter>> for more information
about stop word configuration.
[float]
[discrete]
=== Example configuration
In this example, we configure the `pattern` analyzer to split email addresses
@ -258,7 +258,7 @@ The above example produces the following terms:
[ john, smith, foo, bar, com ]
---------------------------
[float]
[discrete]
==== CamelCase tokenizer
The following more complicated example splits CamelCase text into tokens:
@ -363,7 +363,7 @@ The regex above is easier to understand as:
)
--------------------------------------------------
[float]
[discrete]
=== Definition
The `pattern` analyzer consists of:


@ -10,7 +10,7 @@ Segmentation algorithm, as specified in
http://unicode.org/reports/tr29/[Unicode Standard Annex #29]) and works well
for most languages.
[float]
[discrete]
=== Example output
[source,console]
@ -119,7 +119,7 @@ The above sentence would produce the following terms:
[ the, 2, quick, brown, foxes, jumped, over, the, lazy, dog's, bone ]
---------------------------
[float]
[discrete]
=== Configuration
The `standard` analyzer accepts the following parameters:
@ -143,7 +143,7 @@ See the <<analysis-stop-tokenfilter,Stop Token Filter>> for more information
about stop word configuration.
[float]
[discrete]
=== Example configuration
In this example, we configure the `standard` analyzer to have a
@ -263,7 +263,7 @@ The above example produces the following terms:
[ 2, quick, brown, foxes, jumpe, d, over, lazy, dog's, bone ]
---------------------------
[float]
[discrete]
=== Definition
The `standard` analyzer consists of:


@ -8,7 +8,7 @@ The `stop` analyzer is the same as the <<analysis-simple-analyzer,`simple` analy
but adds support for removing stop words. It defaults to using the
`_english_` stop words.
[float]
[discrete]
=== Example output
[source,console]
@ -103,7 +103,7 @@ The above sentence would produce the following terms:
[ quick, brown, foxes, jumped, over, lazy, dog, s, bone ]
---------------------------
[float]
[discrete]
=== Configuration
The `stop` analyzer accepts the following parameters:
@ -123,7 +123,7 @@ The `stop` analyzer accepts the following parameters:
See the <<analysis-stop-tokenfilter,Stop Token Filter>> for more information
about stop word configuration.
[float]
[discrete]
=== Example configuration
In this example, we configure the `stop` analyzer to use a specified list of
@ -228,7 +228,7 @@ The above example produces the following terms:
[ quick, brown, foxes, jumped, lazy, dog, s, bone ]
---------------------------
[float]
[discrete]
=== Definition
It consists of:


@ -7,7 +7,7 @@
The `whitespace` analyzer breaks text into terms whenever it encounters a
whitespace character.
[float]
[discrete]
=== Example output
[source,console]
@ -109,12 +109,12 @@ The above sentence would produce the following terms:
[ The, 2, QUICK, Brown-Foxes, jumped, over, the, lazy, dog's, bone. ]
---------------------------
[float]
[discrete]
=== Configuration
The `whitespace` analyzer is not configurable.
[float]
[discrete]
=== Definition
It consists of:


@ -22,7 +22,7 @@ Read more about http://www.regular-expressions.info/catastrophic.html[pathologic
========================================
[float]
[discrete]
=== Configuration
The `pattern_replace` character filter accepts the following parameters:
@ -43,7 +43,7 @@ The `pattern_replace` character filter accepts the following parameters:
Java regular expression http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags].
Flags should be pipe-separated, eg `"CASE_INSENSITIVE|COMMENTS"`.
[float]
[discrete]
=== Example configuration
In this example, we configure the `pattern_replace` character filter to


@ -16,7 +16,7 @@ following: `arabic_normalization`, `asciifolding`, `bengali_normalization`,
Elasticsearch ships with a `lowercase` built-in normalizer. For other forms of
normalization a custom configuration is required.
[float]
[discrete]
=== Custom normalizers
Custom normalizers take a list of


@ -11,7 +11,7 @@ output tokens at the same position will be removed.
WARNING: If the incoming token stream has duplicate tokens, then these will also be
removed by the multiplexer
[float]
[discrete]
=== Options
[horizontal]
filters:: a list of token filters to apply to incoming tokens. These can be any
@ -27,7 +27,7 @@ preserve_original:: if `true` (the default) then emit the original token in
addition to the filtered tokens
[float]
[discrete]
=== Settings example
You can set it up like:


@ -94,7 +94,7 @@ set to `false` no mapping would get added as when `expand=false` the target mapp
`expand=true` then the mappings added would be equivalent to `foo, baz => foo, baz` i.e., all mappings other than the
stop word.
[float]
[discrete]
[[synonym-graph-tokenizer-ignore_case-deprecated]]
==== `tokenizer` and `ignore_case` are deprecated
@ -104,7 +104,7 @@ The `ignore_case` parameter works with `tokenizer` parameter only.
Two synonym formats are supported: Solr, WordNet.
[float]
[discrete]
==== Solr synonyms
The following is a sample format of the file:
@ -142,7 +142,7 @@ PUT /test_index
However, it is recommended to define large synonyms set in a file using
`synonyms_path`, because specifying them inline increases cluster size unnecessarily.
[float]
[discrete]
==== WordNet synonyms
Synonyms based on http://wordnet.princeton.edu/[WordNet] format can be
@ -175,7 +175,7 @@ PUT /test_index
Using `synonyms_path` to define WordNet synonyms in a file is supported
as well.
[float]
[discrete]
==== Parsing synonym files
Elasticsearch will use the token filters preceding the synonym filter


@ -85,7 +85,7 @@ set to `false` no mapping would get added as when `expand=false` the target mapp
stop word.
[float]
[discrete]
[[synonym-tokenizer-ignore_case-deprecated]]
==== `tokenizer` and `ignore_case` are deprecated
@ -95,7 +95,7 @@ The `ignore_case` parameter works with `tokenizer` parameter only.
Two synonym formats are supported: Solr, WordNet.
[float]
[discrete]
==== Solr synonyms
The following is a sample format of the file:
@ -133,7 +133,7 @@ PUT /test_index
However, it is recommended to define large synonyms set in a file using
`synonyms_path`, because specifying them inline increases cluster size unnecessarily.
[float]
[discrete]
==== WordNet synonyms
Synonyms based on http://wordnet.princeton.edu/[WordNet] format can be


@ -18,7 +18,7 @@ represents (used for highlighting search snippets).
Elasticsearch has a number of built in tokenizers which can be used to build
<<analysis-custom-analyzer,custom analyzers>>.
[float]
[discrete]
=== Word Oriented Tokenizers
The following tokenizers are usually used for tokenizing full text into
@ -59,7 +59,7 @@ The `classic` tokenizer is a grammar based tokenizer for the English Language.
The `thai` tokenizer segments Thai text into words.
[float]
[discrete]
=== Partial Word Tokenizers
These tokenizers break up text or words into small fragments, for partial word
@ -80,7 +80,7 @@ n-grams of each word which are anchored to the start of the word, e.g. `quick` -
`[q, qu, qui, quic, quick]`.
[float]
[discrete]
=== Structured Text Tokenizers
The following tokenizers are usually used with structured text like


@ -9,7 +9,7 @@ character which is in a defined set. It is mostly useful for cases where a simpl
custom tokenization is desired, and the overhead of use of the <<analysis-pattern-tokenizer, `pattern` tokenizer>>
is not acceptable.
[float]
[discrete]
=== Configuration
The `char_group` tokenizer accepts one parameter:
@ -26,7 +26,7 @@ The `char_group` tokenizer accepts one parameter:
it is split at `max_token_length` intervals. Defaults to `255`.
[float]
[discrete]
=== Example output
[source,console]


@ -18,7 +18,7 @@ languages other than English:
* It recognizes email addresses and internet hostnames as one token.
[float]
[discrete]
=== Example output
[source,console]
@ -127,7 +127,7 @@ The above sentence would produce the following terms:
[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone ]
---------------------------
[float]
[discrete]
=== Configuration
The `classic` tokenizer accepts the following parameters:
@ -138,7 +138,7 @@ The `classic` tokenizer accepts the following parameters:
The maximum token length. If a token is seen that exceeds this length then
it is split at `max_token_length` intervals. Defaults to `255`.
[float]
[discrete]
=== Example configuration
In this example, we configure the `classic` tokenizer to have a


@ -17,7 +17,7 @@ order, such as movie or song titles, the
choice than edge N-grams. Edge N-grams have the advantage when trying to
autocomplete words that can appear in any order.
[float]
[discrete]
=== Example output
With the default settings, the `edge_ngram` tokenizer treats the initial text as a
@ -70,7 +70,7 @@ The above sentence would produce the following terms:
NOTE: These default gram lengths are almost entirely useless. You need to
configure the `edge_ngram` before using it.
[float]
[discrete]
=== Configuration
The `edge_ngram` tokenizer accepts the following parameters:
@ -108,7 +108,7 @@ Character classes may be any of the following:
setting this to `+-_` will make the tokenizer treat the plus, minus and
underscore sign as part of a token.
[float]
[discrete]
[[max-gram-limits]]
=== Limitations of the `max_gram` parameter
@ -133,7 +133,7 @@ and `apple`.
We recommend testing both approaches to see which best fits your
use case and desired search experience.
[float]
[discrete]
=== Example configuration
In this example, we configure the `edge_ngram` tokenizer to treat letters and


@ -8,7 +8,7 @@ The `keyword` tokenizer is a ``noop'' tokenizer that accepts whatever text it
is given and outputs the exact same text as a single term. It can be combined
with token filters to normalise output, e.g. lower-casing email addresses.
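
For instance, a custom analyzer along these lines (index and analyzer names
are made up) pairs the `keyword` tokenizer with the `lowercase` token filter:

[source,console]
----
PUT /keyword_sample
{
  "settings": {
    "analysis": {
      "analyzer": {
        "email_analyzer": {
          "tokenizer": "keyword",
          "filter": [ "lowercase" ]
        }
      }
    }
  }
}
----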
[float]
[discrete]
=== Example output
[source,console]
@ -95,7 +95,7 @@ The request produces the following token:
---------------------------
[float]
[discrete]
=== Configuration
The `keyword` tokenizer accepts the following parameters:


@ -9,7 +9,7 @@ character which is not a letter. It does a reasonable job for most European
languages, but does a terrible job for some Asian languages, where words are
not separated by spaces.
[float]
[discrete]
=== Example output
[source,console]
@ -118,7 +118,7 @@ The above sentence would produce the following terms:
[ The, QUICK, Brown, Foxes, jumped, over, the, lazy, dog, s, bone ]
---------------------------
[float]
[discrete]
=== Configuration
The `letter` tokenizer is not configurable.


@ -13,7 +13,7 @@ lowercases all terms. It is functionally equivalent to the
efficient as it performs both steps in a single pass.
[float]
[discrete]
=== Example output
[source,console]
@ -122,7 +122,7 @@ The above sentence would produce the following terms:
[ the, quick, brown, foxes, jumped, over, the, lazy, dog, s, bone ]
---------------------------
[float]
[discrete]
=== Configuration
The `lowercase` tokenizer is not configurable.


@ -13,7 +13,7 @@ N-grams are like a sliding window that moves across the word - a continuous
sequence of characters of the specified length. They are useful for querying
languages that don't use spaces or that have long compound words, like German.
[float]
[discrete]
=== Example output
With the default settings, the `ngram` tokenizer treats the initial text as a
@ -168,7 +168,7 @@ The above sentence would produce the following terms:
[ Q, Qu, u, ui, i, ic, c, ck, k, "k ", " ", " F", F, Fo, o, ox, x ]
---------------------------
[float]
[discrete]
=== Configuration
The `ngram` tokenizer accepts the following parameters:
@ -210,7 +210,7 @@ matches. A tri-gram (length `3`) is a good place to start.
The index level setting `index.max_ngram_diff` controls the maximum allowed
difference between `max_gram` and `min_gram`.
[float]
[discrete]
=== Example configuration
In this example, we configure the `ngram` tokenizer to treat letters and


@ -8,7 +8,7 @@ The `path_hierarchy` tokenizer takes a hierarchical value like a filesystem
path, splits on the path separator, and emits a term for each component in the
tree.
[float]
[discrete]
=== Example output
[source,console]
@ -62,7 +62,7 @@ The above text would produce the following terms:
[ /one, /one/two, /one/two/three ]
---------------------------
[float]
[discrete]
=== Configuration
The `path_hierarchy` tokenizer accepts the following parameters:
@ -86,7 +86,7 @@ The `path_hierarchy` tokenizer accepts the following parameters:
`skip`::
The number of initial tokens to skip. Defaults to `0`.
[float]
[discrete]
=== Example configuration
In this example, we configure the `path_hierarchy` tokenizer to split on `-`


@ -25,7 +25,7 @@ Read more about http://www.regular-expressions.info/catastrophic.html[pathologic
========================================
[float]
[discrete]
=== Example output
[source,console]
@ -99,7 +99,7 @@ The above sentence would produce the following terms:
[ The, foo_bar_size, s, default, is, 5 ]
---------------------------
[float]
[discrete]
=== Configuration
The `pattern` tokenizer accepts the following parameters:
@ -118,7 +118,7 @@ The `pattern` tokenizer accepts the following parameters:
Which capture group to extract as tokens. Defaults to `-1` (split).
[float]
[discrete]
=== Example configuration
In this example, we configure the `pattern` tokenizer to break text into


@ -20,7 +20,7 @@ For an explanation of the supported features and syntax, see <<regexp-syntax,Reg
The default pattern is the empty string, which produces no terms. This
tokenizer should always be configured with a non-default pattern.
[float]
[discrete]
=== Configuration
The `simple_pattern` tokenizer accepts the following parameters:
@ -29,7 +29,7 @@ The `simple_pattern` tokenizer accepts the following parameters:
`pattern`::
{lucene-core-javadoc}/org/apache/lucene/util/automaton/RegExp.html[Lucene regular expression], defaults to the empty string.
[float]
[discrete]
=== Example configuration
This example configures the `simple_pattern` tokenizer to produce terms that are


@ -21,7 +21,7 @@ The default pattern is the empty string, which produces one term containing the
full input. This tokenizer should always be configured with a non-default
pattern.
[float]
[discrete]
=== Configuration
The `simple_pattern_split` tokenizer accepts the following parameters:
@ -30,7 +30,7 @@ The `simple_pattern_split` tokenizer accepts the following parameters:
`pattern`::
A {lucene-core-javadoc}/org/apache/lucene/util/automaton/RegExp.html[Lucene regular expression], defaults to the empty string.
[float]
[discrete]
=== Example configuration
This example configures the `simple_pattern_split` tokenizer to split the input


@ -9,7 +9,7 @@ Unicode Text Segmentation algorithm, as specified in
http://unicode.org/reports/tr29/[Unicode Standard Annex #29]) and works well
for most languages.
[float]
[discrete]
=== Example output
[source,console]
@ -118,7 +118,7 @@ The above sentence would produce the following terms:
[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone ]
---------------------------
[float]
[discrete]
=== Configuration
The `standard` tokenizer accepts the following parameters:
@ -129,7 +129,7 @@ The `standard` tokenizer accepts the following parameters:
The maximum token length. If a token is seen that exceeds this length then
it is split at `max_token_length` intervals. Defaults to `255`.
[float]
[discrete]
=== Example configuration
In this example, we configure the `standard` tokenizer to have a


@ -13,7 +13,7 @@ WARNING: This tokenizer may not be supported by all JREs. It is known to work
with Sun/Oracle and OpenJDK. If your application needs to be fully portable,
consider using the {plugins}/analysis-icu-tokenizer.html[ICU Tokenizer] instead.
[float]
[discrete]
=== Example output
[source,console]
@ -101,7 +101,7 @@ The above sentence would produce the following terms:
[ การ, ที่, ได้, ต้อง, แสดง, ว่า, งาน, ดี ]
---------------------------
[float]
[discrete]
=== Configuration
The `thai` tokenizer is not configurable.


@ -7,7 +7,7 @@
The `uax_url_email` tokenizer is like the <<analysis-standard-tokenizer,`standard` tokenizer>> except that it
recognises URLs and email addresses as single tokens.
[float]
[discrete]
=== Example output
[source,console]
@ -74,7 +74,7 @@ while the `standard` tokenizer would produce:
[ Email, me, at, john.smith, global, international.com ]
---------------------------
[float]
[discrete]
=== Configuration
The `uax_url_email` tokenizer accepts the following parameters:
@ -85,7 +85,7 @@ The `uax_url_email` tokenizer accepts the following parameters:
The maximum token length. If a token is seen that exceeds this length then
it is split at `max_token_length` intervals. Defaults to `255`.
[float]
[discrete]
=== Example configuration
In this example, we configure the `uax_url_email` tokenizer to have a


@ -7,7 +7,7 @@
The `whitespace` tokenizer breaks text into terms whenever it encounters a
whitespace character.
[float]
[discrete]
=== Example output
[source,console]
@ -109,7 +109,7 @@ The above sentence would produce the following terms:
[ The, 2, QUICK, Brown-Foxes, jumped, over, the, lazy, dog's, bone. ]
---------------------------
[float]
[discrete]
=== Configuration
The `whitespace` tokenizer accepts the following parameters:


@ -160,7 +160,7 @@ include::rest-api/cron-expressions.asciidoc[]
The following options can be applied to all of the REST APIs.
[float]
[discrete]
==== Pretty Results
When appending `?pretty=true` to any request made, the JSON returned
@ -169,7 +169,7 @@ to set `?format=yaml` which will cause the result to be returned in the
(sometimes) more readable yaml format.
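
For example (the endpoint is arbitrary; any request behaves the same way):

[source,console]
----
GET /_cluster/health?pretty=true
----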
[float]
[discrete]
==== Human readable output
Statistics are returned in a format suitable for humans
@ -182,7 +182,7 @@ consumption. The default for the `human` flag is
`false`.
[[date-math]]
[float]
[discrete]
==== Date Math
Most parameters which accept a formatted date value -- such as `gt` and `lt`
@ -219,7 +219,7 @@ Assuming `now` is `2001-01-01 12:00:00`, some examples are:
`now-1h/d`:: `now` in milliseconds minus one hour, rounded down to UTC 00:00. Resolves to: `2001-01-01 00:00:00`
`2001.02.01\|\|+1M/d`:: `2001-02-01` in milliseconds plus one month. Resolves to: `2001-03-01 00:00:00`
[float]
[discrete]
[[common-options-response-filtering]]
==== Response Filtering
@ -376,7 +376,7 @@ GET /_search?filter_path=hits.hits._source&_source=title&sort=rating:desc
--------------------------------------------------
[float]
[discrete]
==== Flat Settings
The `flat_settings` flag affects rendering of the lists of settings. When the
@ -445,27 +445,27 @@ Returns:
By default `flat_settings` is set to `false`.
[float]
[discrete]
==== Parameters
REST parameters (when using HTTP, map to HTTP URL parameters) follow the
convention of using underscore casing.
[float]
[discrete]
==== Boolean Values
All REST API parameters (both request parameters and JSON body) support
providing boolean "false" as the value `false` and boolean "true" as the
value `true`. All other values will raise an error.
[float]
[discrete]
==== Number Values
All REST APIs support providing numbered parameters as `string` on top
of supporting the native JSON number types.
[[time-units]]
[float]
[discrete]
==== Time units
Whenever durations need to be specified, e.g. for a `timeout` parameter, the duration must specify
@ -481,7 +481,7 @@ the unit, like `2d` for 2 days. The supported units are:
`nanos`:: Nanoseconds
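
For example, a request that waits up to 30 seconds for a yellow cluster
status:

[source,console]
----
GET /_cluster/health?wait_for_status=yellow&timeout=30s
----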
[[byte-units]]
[float]
[discrete]
==== Byte size units
Whenever the byte size of data needs to be specified, e.g. when setting a buffer size
@ -497,7 +497,7 @@ these units use powers of 1024, so `1kb` means 1024 bytes. The supported units a
`pb`:: Petabytes
[[size-units]]
[float]
[discrete]
==== Unit-less quantities
Unit-less quantities are those that don't have a "unit" like "bytes" or "Hertz" or "meter" or "long tonne".
@ -513,7 +513,7 @@ when we mean 87 though. These are the supported multipliers:
`p`:: Peta
[[distance-units]]
[float]
[discrete]
==== Distance Units
Wherever distances need to be specified, such as the `distance` parameter in
@ -535,7 +535,7 @@ Millimeter:: `mm` or `millimeters`
Nautical mile:: `NM`, `nmi`, or `nauticalmiles`
[[fuzziness]]
[float]
[discrete]
==== Fuzziness
Some queries and APIs support parameters to allow inexact _fuzzy_ matching,
@ -567,7 +567,7 @@ the default values are 3 and 6, equivalent to `AUTO:3,6` that make for lengths:
`AUTO` should generally be the preferred value for `fuzziness`.
--
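
As a sketch, a `match` query using `AUTO` fuzziness (field name and query
text are made up):

[source,console]
----
GET /_search
{
  "query": {
    "match": {
      "message": {
        "query": "quikc brwn",
        "fuzziness": "AUTO"
      }
    }
  }
}
----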
[float]
[discrete]
[[common-options-error-options]]
==== Enabling stack traces
@ -643,7 +643,7 @@ The response looks like:
// TESTRESPONSE[s/"stack_trace": "java.lang.IllegalArgum.+\.\.\."/"stack_trace": $body.error.stack_trace/]
// TESTRESPONSE[s/"stack_trace": "java.lang.Number.+\.\.\."/"stack_trace": $body.error.caused_by.stack_trace/]
[float]
[discrete]
==== Request body in query string
For libraries that don't accept a request body for non-POST requests,
@ -652,7 +652,7 @@ instead. When using this method, the `source_content_type` parameter
should also be passed with a media type value that indicates the format
of the source, such as `application/json`.
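
A sketch of that approach with cURL, passing a URL-encoded `match_all` query
as the `source` parameter (assuming the standard `source`/`source_content_type`
pair):

[source,sh]
----
curl -XGET "localhost:9200/_search?source_content_type=application/json&source=%7B%22query%22%3A%7B%22match_all%22%3A%7B%7D%7D%7D"
----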
[float]
[discrete]
==== Content-Type Requirements
The type of the content sent in a request body must be specified using


@ -5,7 +5,7 @@
You can use the following APIs to perform autoscaling operations.
[float]
[discrete]
[[autoscaling-api-top-level]]
=== Top-Level


@ -21,11 +21,11 @@ All the cat commands accept a query string parameter `help` to see all
the headers and info they provide, and the `/_cat` command alone lists all
the available commands.
[float]
[discrete]
[[common-parameters]]
=== Common parameters
[float]
[discrete]
[[verbose]]
==== Verbose
@ -46,7 +46,7 @@ u_n93zwxThWHi1PDBJAGAg 127.0.0.1 127.0.0.1 u_n93zw
--------------------------------------------------
// TESTRESPONSE[s/u_n93zw(xThWHi1PDBJAGAg)?/.+/ non_json]
[float]
[discrete]
[[help]]
==== Help
@ -74,7 +74,7 @@ For example `GET _cat/shards/twitter?help` or `GET _cat/indices/twi*?help`
results in an error. Use `GET _cat/shards?help` or `GET _cat/indices?help`
instead.
[float]
[discrete]
[[headers]]
==== Headers
@ -98,7 +98,7 @@ You can also request multiple columns using simple wildcards like
`/_cat/thread_pool?h=ip,queue*` to get all headers (or aliases) starting
with `queue`.
[float]
[discrete]
[[numeric-formats]]
==== Numeric formats
@ -141,7 +141,7 @@ If you want to change the <<size-units,size units>>, use `size` parameter.
If you want to change the <<byte-units,byte units>>, use `bytes` parameter.
[float]
[discrete]
==== Response as text, json, smile, yaml or cbor
[source,sh]
@ -193,7 +193,7 @@ For example:
--------------------------------------------------
// NOTCONSOLE
[float]
[discrete]
[[sort]]
==== Sort


@ -5,13 +5,13 @@
You can use the following APIs to perform {ccr} operations.
[float]
[discrete]
[[ccr-api-top-level]]
=== Top-Level
* <<ccr-get-stats,Get {ccr} stats>>
[float]
[discrete]
[[ccr-api-follow]]
=== Follow
@ -23,7 +23,7 @@ You can use the following APIs to perform {ccr} operations.
* <<ccr-get-follow-stats,Get stats about follower indices>>
* <<ccr-get-follow-info,Get info about follower indices>>
[float]
[discrete]
[[ccr-api-auto-follow]]
=== Auto-follow


@ -10,7 +10,7 @@ authorities (CA), certificate signing requests (CSR), and signed certificates
for use with the Elastic Stack. Though this command is deprecated, you do not
need to replace CAs, CSRs, or certificates that it created.
[float]
[discrete]
=== Synopsis
[source,shell]
@ -23,7 +23,7 @@ bin/elasticsearch-certgen
([-s, --silent] | [-v, --verbose])
--------------------------------------------------
[float]
[discrete]
=== Description
By default, the command runs in interactive mode and you are prompted for
@ -54,7 +54,7 @@ organization-specific certificate authority to obtain signed certificates. The
signed certificates must be in PEM format to work with the {stack}
{security-features}.
[float]
[discrete]
=== Parameters
`--cert <cert_file>`:: Specifies to generate new instance certificates and keys
@ -103,10 +103,10 @@ which can be blank. This parameter cannot be used with the `-csr` parameter.
`-v, --verbose`:: Shows verbose output.
[float]
[discrete]
=== Examples
[float]
[discrete]
[[certgen-silent]]
==== Using `elasticsearch-certgen` in Silent Mode


@ -6,7 +6,7 @@
The `elasticsearch-certutil` command simplifies the creation of certificates for
use with Transport Layer Security (TLS) in the {stack}.
[float]
[discrete]
=== Synopsis
[source,shell]
@ -32,14 +32,14 @@ bin/elasticsearch-certutil
[-h, --help] ([-s, --silent] | [-v, --verbose])
--------------------------------------------------
[float]
[discrete]
=== Description
You can specify one of the following modes: `ca`, `cert`, `csr`, `http`. The
`elasticsearch-certutil` command also supports a silent mode of operation to
enable easier batch operations.
[float]
[discrete]
[[certutil-ca]]
==== CA mode
@ -51,7 +51,7 @@ format.
You can subsequently use these files as input for the `cert` mode of the command.
[float]
[discrete]
[[certutil-cert]]
==== CERT mode
@ -90,7 +90,7 @@ certificates and keys and packages them into a zip file.
If you specify the `--keep-ca-key`, `--multiple` or `--in` parameters,
the command produces a zip file containing the generated certificates and keys.
[float]
[discrete]
[[certutil-csr]]
==== CSR mode
@ -111,7 +111,7 @@ private keys for each instance. Each CSR is provided as a standard PEM
encoding of a PKCS#10 CSR. Each key is provided as a PEM encoding of an RSA
private key.
[float]
[discrete]
[[certutil-http]]
==== HTTP mode
@ -123,7 +123,7 @@ authority (CA), a certificate signing request (CSR), or certificates and keys
for use in {es} and {kib}. Each folder in the zip file contains a readme that
explains how to use the files.
[float]
[discrete]
=== Parameters
`ca`:: Specifies to generate a new local certificate authority (CA). This
@ -214,7 +214,7 @@ parameter cannot be used with the `csr` parameter.
`-v, --verbose`:: Shows verbose output.
[float]
[discrete]
=== Examples
The following command generates a CA certificate and private key in PKCS#12
@ -244,7 +244,7 @@ which you can copy to the relevant configuration directory for each Elastic
product that you want to configure. For more information, see
<<ssl-tls>>.
[float]
[discrete]
[[certutil-silent]]
==== Using `elasticsearch-certutil` in Silent Mode


@ -10,7 +10,7 @@ to the native realm. From 5.0 onward, you should use the `native` realm to
manage roles and local users.
[float]
[discrete]
=== Synopsis
[source,shell]
@ -23,7 +23,7 @@ bin/elasticsearch-migrate
[-s, --silent] [-v, --verbose])
--------------------------------------------------
[float]
[discrete]
=== Description
NOTE: When migrating from Shield 2.x, the `elasticsearch-migrate` tool should be
@ -38,7 +38,7 @@ roles that already exist in the `native` realm are not replaced or
overridden. If the names you specify with the `--users` and `--roles` options
don't exist in the `file` realm, they are skipped.
[float]
[discrete]
[[migrate-tool-options]]
=== Parameters
The `native` subcommand supports the following options:
@ -73,7 +73,7 @@ Username to use for authentication with {es}.
`-v, --verbose`:: Shows verbose output.
[float]
[discrete]
=== Examples
Run the `elasticsearch-migrate` tool when {xpack} is installed. For example:


@ -7,7 +7,7 @@ allows you to adjust the <<modules-node,role>> of a node, unsafely edit cluster
settings and may be able to recover some data after a disaster or start a node
even if it is incompatible with the data on disk.
[float]
[discrete]
=== Synopsis
[source,shell]
@ -17,7 +17,7 @@ bin/elasticsearch-node repurpose|unsafe-bootstrap|detach-cluster|override-versio
[-h, --help] ([-s, --silent] | [-v, --verbose])
--------------------------------------------------
[float]
[discrete]
=== Description
This tool has a number of modes:
@ -51,7 +51,7 @@ This tool has a number of modes:
{es}.
[[node-tool-repurpose]]
[float]
[discrete]
==== Changing the role of a node
There may be situations where you want to repurpose a node without following
@ -83,7 +83,7 @@ The tool provides a summary of the data to be deleted and asks for confirmation
before making any changes. You can get detailed information about the affected
indices and shards by passing the verbose (`-v`) option.
[float]
[discrete]
==== Removing persistent cluster settings
There may be situations where a node contains persistent cluster
@ -103,7 +103,7 @@ The intended use is:
* Repeat for all other master-eligible nodes
* Start the nodes
[float]
[discrete]
==== Removing custom metadata from the cluster state
There may be situations where a node contains custom metadata, typically
@ -121,7 +121,7 @@ The intended use is:
* Repeat for all other master-eligible nodes
* Start the nodes
[float]
[discrete]
==== Recovering data after a disaster
Sometimes {es} nodes are temporarily stopped, perhaps because of the need to
@ -161,7 +161,7 @@ way forward that does not risk data loss, but it may be possible to use the
data from the failed cluster.
[[node-tool-override-version]]
[float]
[discrete]
==== Bypassing version checks
The data that {es} writes to disk is designed to be read by the current version
@ -181,7 +181,7 @@ tool to overwrite the version number stored in the data path with the current
version, causing {es} to believe that it is compatible with the on-disk data.
[[node-tool-unsafe-bootstrap]]
[float]
[discrete]
===== Unsafe cluster bootstrapping
If there is at least one remaining master-eligible node, but it is not possible
@ -256,7 +256,7 @@ there has been no data loss, it just means that tool was able to complete its
job.
[[node-tool-detach-cluster]]
[float]
[discrete]
===== Detaching nodes from their cluster
It is unsafe for nodes to move between clusters, because different clusters
@ -321,7 +321,7 @@ that there has been no data loss, it just means that tool was able to complete
its job.
[float]
[discrete]
=== Parameters
`repurpose`:: Delete excess data when a node's roles are changed.
@ -350,10 +350,10 @@ to `0`, meaning to use the first node in the data path.
`-v, --verbose`:: Shows verbose output.
[float]
[discrete]
=== Examples
[float]
[discrete]
==== Repurposing a node as a dedicated master node
In this example, a former data node is repurposed as a dedicated master node.
@ -375,7 +375,7 @@ Confirm [y/N] y
Node successfully repurposed to master and no-data.
----
[float]
[discrete]
==== Repurposing a node as a coordinating-only node
In this example, a node that previously held data is repurposed as a
@ -398,7 +398,7 @@ Confirm [y/N] y
Node successfully repurposed to no-master and no-data.
----
[float]
[discrete]
==== Removing persistent cluster settings
If your nodes contain persistent cluster settings that prevent the cluster
@ -432,7 +432,7 @@ You can also use wildcards to remove multiple settings, for example using
node$ ./bin/elasticsearch-node remove-settings xpack.monitoring.*
----
[float]
[discrete]
==== Removing custom metadata from the cluster state
If the on-disk cluster state contains custom metadata that prevents the node
@ -459,7 +459,7 @@ Confirm [y/N] y
Customs were successfully removed from the cluster state
----
[float]
[discrete]
==== Unsafe cluster bootstrapping
Suppose your cluster had five master-eligible nodes and you have permanently
@ -535,7 +535,7 @@ Confirm [y/N] y
Master node was successfully bootstrapped
----
[float]
[discrete]
==== Detaching nodes from their cluster
After unsafely bootstrapping a new cluster, run the `elasticsearch-node
@ -561,7 +561,7 @@ Confirm [y/N] y
Node was successfully detached from the cluster
----
[float]
[discrete]
==== Bypassing version checks
Run the `elasticsearch-node override-version` command to overwrite the version


@ -6,7 +6,7 @@
The `elasticsearch-saml-metadata` command can be used to generate a SAML 2.0 Service
Provider Metadata file.
[float]
[discrete]
=== Synopsis
[source,shell]
@ -23,7 +23,7 @@ bin/elasticsearch-saml-metadata
[-h, --help] ([-s, --silent] | [-v, --verbose])
--------------------------------------------------
[float]
[discrete]
=== Description
The SAML 2.0 specification provides a mechanism for Service Providers to
@ -44,7 +44,7 @@ If your {es} keystore is password protected, you
are prompted to enter the password when you run the
`elasticsearch-saml-metadata` command.
[float]
[discrete]
=== Parameters
`--attribute <name>`:: Specifies a SAML attribute that should be
@ -107,7 +107,7 @@ realm in your {es} configuration.
`-v, --verbose`:: Shows verbose output.
[float]
[discrete]
=== Examples
The following command generates a default metadata file for the `saml1` realm:


@ -6,7 +6,7 @@
The `elasticsearch-setup-passwords` command sets the passwords for the
<<built-in-users,built-in users>>.
[float]
[discrete]
=== Synopsis
[source,shell]
@ -16,7 +16,7 @@ bin/elasticsearch-setup-passwords auto|interactive
[-s, --silent] [-u, --url "<URL>"] [-v, --verbose]
--------------------------------------------------
[float]
[discrete]
=== Description
This command is intended for use only during the initial configuration of the
@ -40,7 +40,7 @@ override settings in your `elasticsearch.yml` file by using the `-E` command
option. For more information about debugging connection failures, see
<<trb-security-setup>>.
[float]
[discrete]
=== Parameters
`auto`:: Outputs randomly-generated passwords to the console.
@ -63,7 +63,7 @@ you must specify an HTTPS URL.
`-v, --verbose`:: Shows verbose output.
[float]
[discrete]
=== Examples
The following example uses the `-u` parameter to tell the tool where to submit


@ -11,7 +11,7 @@ You will lose the corrupted data when you run `elasticsearch-shard`. This tool
should only be used as a last resort if there is no way to recover from another
copy of the shard or restore a snapshot.
[float]
[discrete]
=== Synopsis
[source,shell]
@ -23,7 +23,7 @@ bin/elasticsearch-shard remove-corrupted-data
[-h, --help] ([-s, --silent] | [-v, --verbose])
--------------------------------------------------
[float]
[discrete]
=== Description
When {es} detects that a shard's data is corrupted, it fails that shard copy and
@ -44,7 +44,7 @@ There are two ways to specify the path:
* Use the `--dir` option to specify the full path to the corrupted index or
translog files.
[float]
[discrete]
==== Removing corrupted data
`elasticsearch-shard` analyses the shard copy and provides an overview of the

View File

@ -6,7 +6,7 @@
The `elasticsearch-syskeygen` command creates a system key file in the
elasticsearch config directory.
[float]
[discrete]
=== Synopsis
[source,shell]
@ -16,7 +16,7 @@ bin/elasticsearch-syskeygen
([-s, --silent] | [-v, --verbose])
--------------------------------------------------
[float]
[discrete]
=== Description
The command generates a `system_key` file, which you can use to symmetrically
@ -27,7 +27,7 @@ from returning and storing information that contains clear text credentials. See
IMPORTANT: The system key is a symmetric key, so the same key must be used on
every node in the cluster.
[float]
[discrete]
=== Parameters
`-E <KeyValuePair>`:: Configures a setting. For example, if you have a custom
@ -41,7 +41,7 @@ environment variable.
`-v, --verbose`:: Shows verbose output.
[float]
[discrete]
=== Examples
The following command generates a `system_key` file in the

View File

@ -6,7 +6,7 @@
If you use file-based user authentication, the `elasticsearch-users` command
enables you to add and remove users, assign user roles, and manage passwords.
[float]
[discrete]
=== Synopsis
[source,shell]
@ -19,7 +19,7 @@ bin/elasticsearch-users
([userdel <username>])
--------------------------------------------------
[float]
[discrete]
=== Description
If you use the built-in `file` internal realm, users are defined in local files
@ -40,7 +40,7 @@ TIP: To ensure that {es} can read the user and role information at startup, run
command as root or some other user updates the permissions for the `users` and
`users_roles` files and prevents {es} from accessing them.
[float]
[discrete]
=== Parameters
`-a <roles>`:: If used with the `roles` parameter, adds a comma-separated list
@ -81,10 +81,10 @@ removing roles within the same command to change a user's roles.
//`-v, --verbose`:: Shows verbose output.
//[float]
//[discrete]
//=== Authorization
[float]
[discrete]
=== Examples
The following example adds a new user named `jacknich` to the `file` realm. The

View File

@ -89,7 +89,7 @@ Experiment with different settings to find the optimal size for your particular
When using the HTTP API, make sure that the client does not send HTTP chunks,
as this will slow things down.
[float]
[discrete]
[[bulk-clients]]
===== Client support for bulk requests
@ -116,7 +116,7 @@ JavaScript::
.NET::
See https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/indexing-documents.html#bulkall-observable[`BulkAllObservable`]
[float]
[discrete]
[[bulk-curl]]
===== Submitting bulk requests with cURL
@ -135,7 +135,7 @@ $ curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9200/_bulk --
// NOTCONSOLE
// Not converting to console because this shows how curl works
[float]
[discrete]
[[bulk-optimistic-concurrency-control]]
===== Optimistic Concurrency Control
@ -146,7 +146,7 @@ how operations are executed, based on the last modification to existing
documents. See <<optimistic-concurrency-control>> for more details.
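As a minimal sketch (the index name, sequence number, and primary term below are placeholders), an action's metadata line can carry `if_seq_no` and `if_primary_term`:
[source,console]
--------------------------------------------------
POST _bulk
{ "index" : { "_index" : "twitter", "_id" : "1", "if_seq_no" : 3, "if_primary_term" : 1 } }
{ "message" : "applied only if the sequence number and primary term still match" }
--------------------------------------------------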
[float]
[discrete]
[[bulk-versioning]]
===== Versioning
@ -155,7 +155,7 @@ Each bulk item can include the version value using the
index / delete operation based on the `_version` mapping. It also
supports the `version_type` (see <<index-versioning, versioning>>).
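For example (the version value is a placeholder), an action using external versioning might look like this:
[source,console]
--------------------------------------------------
POST _bulk
{ "index" : { "_index" : "twitter", "_id" : "1", "version" : 10, "version_type" : "external" } }
{ "message" : "indexed only if 10 is greater than the stored version" }
--------------------------------------------------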
[float]
[discrete]
[[bulk-routing]]
===== Routing
@ -166,7 +166,7 @@ index / delete operation based on the `_routing` mapping.
NOTE: Data streams do not support custom routing. Instead, target the
appropriate backing index for the stream.
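For example, a per-action `routing` value goes on the metadata line (the routing value here is a placeholder):
[source,console]
--------------------------------------------------
POST _bulk
{ "index" : { "_index" : "twitter", "_id" : "1", "routing" : "user1" } }
{ "message" : "routed to the shard that user1 hashes to" }
--------------------------------------------------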
[float]
[discrete]
[[bulk-wait-for-active-shards]]
===== Wait For Active Shards
@ -176,7 +176,7 @@ before starting to process the bulk request. See
<<index-wait-for-active-shards,here>> for further details and a usage
example.
[float]
[discrete]
[[bulk-refresh]]
===== Refresh
@ -190,7 +190,7 @@ with five shards. The request will only wait for those three shards to
refresh. The other two shards that make up the index do not
participate in the `_bulk` request at all.
[float]
[discrete]
[[bulk-security]]
===== Security
@ -537,7 +537,7 @@ The API returns the following result:
// TESTRESPONSE[s/"_seq_no" : 3/"_seq_no" : $body.items.3.update._seq_no/]
// TESTRESPONSE[s/"_primary_term" : 4/"_primary_term" : $body.items.3.update._primary_term/]
[float]
[discrete]
[[bulk-update]]
===== Bulk update example

View File

@ -2,7 +2,7 @@
[[docs-replication]]
=== Reading and Writing documents
[float]
[discrete]
==== Introduction
Each index in Elasticsearch is <<scalability,divided into shards>>
@ -53,7 +53,7 @@ encompasses the lifetime of each subsequent stage. For example, the coordinating
stage, which may be spread out across different primary shards, has completed. Each primary stage will not complete until the
in-sync replicas have finished indexing the docs locally and responded to the replica requests.
[float]
[discrete]
===== Failure handling
Many things can go wrong during indexing -- disks can get corrupted, nodes can be disconnected from each other, or some
@ -94,7 +94,7 @@ into the primary will not be lost. Of course, since at that point we are running
issues can cause data loss. See <<index-wait-for-active-shards>> for some mitigation options.
************
[float]
[discrete]
==== Basic read model
Reads in Elasticsearch can be very lightweight lookups by ID or a heavy search request with complex aggregations that
@ -112,7 +112,7 @@ is as follows:
. Send shard level read requests to the selected copies.
. Combine the results and respond. Note that in the case of get by ID look up, only one shard is relevant and this step can be skipped.
[float]
[discrete]
[[shard-failures]]
===== Shard failures
@ -132,7 +132,7 @@ Responses containing partial results still provide a `200 OK` HTTP status code.
Shard failures are indicated by the `timed_out` and `_shards` fields of
the response header.
[float]
[discrete]
==== A few simple implications
Each of these basic flows determines how Elasticsearch behaves as a system for both reads and writes. Furthermore, since read
@ -147,7 +147,7 @@ Read unacknowledged:: Since the primary first indexes locally and then replicate
Two copies by default:: This model can be fault tolerant while maintaining only two copies of the data. This is in contrast to
quorum-based systems where the minimum number of copies for fault tolerance is 3.
[float]
[discrete]
==== Failures
Under failures, the following is possible:
@ -161,7 +161,7 @@ Dirty reads:: An isolated primary can expose writes that will not be acknowledge
At that point the operation is already indexed into the primary and can be read by a concurrent read. Elasticsearch mitigates
this risk by pinging the master every second (by default) and rejecting indexing operations if no master is known.
[float]
[discrete]
==== The Tip of the Iceberg
This document provides a high level overview of how Elasticsearch deals with data. Of course, there is much much more

View File

@ -410,7 +410,7 @@ POST twitter/_delete_by_query?scroll_size=5000
--------------------------------------------------
// TEST[setup:twitter]
[float]
[discrete]
[[docs-delete-by-query-manual-slice]]
===== Slice manually
@ -482,7 +482,7 @@ Which results in a sensible `total` like this one:
}
----------------------------------------------------------------
[float]
[discrete]
[[docs-delete-by-query-automatic-slice]]
===== Use automatic slicing
@ -565,7 +565,7 @@ being deleted.
* Each sub-request gets a slightly different snapshot of the source data stream or index
though these are all taken at approximately the same time.
[float]
[discrete]
[[docs-delete-by-query-rethrottle]]
===== Change throttling for a request
@ -657,7 +657,7 @@ and `wait_for_completion=false` was set on it then it'll come back with
you to delete that document.
[float]
[discrete]
[[docs-delete-by-query-cancel-task-api]]
===== Cancel a delete by query operation

View File

@ -21,7 +21,7 @@ NOTE: You cannot send deletion requests directly to a data stream. To delete a
document in a data stream, you must target the backing index containing the
document. See <<update-delete-docs-in-a-backing-index>>.
[float]
[discrete]
[[optimistic-concurrency-control-delete]]
===== Optimistic concurrency control
@ -31,7 +31,7 @@ term specified by the `if_seq_no` and `if_primary_term` parameters. If a
mismatch is detected, the operation will result in a `VersionConflictException`
and a status code of 409. See <<optimistic-concurrency-control>> for more details.
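As a sketch, using a sequence number and primary term obtained from a previous read (the values are placeholders):
[source,console]
--------------------------------------------------
DELETE /twitter/_doc/1?if_seq_no=362&if_primary_term=2
--------------------------------------------------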
[float]
[discrete]
[[delete-versioning]]
===== Versioning
@ -44,7 +44,7 @@ short time after deletion to allow for control of concurrent operations. The
length of time for which a deleted document's version remains available is
determined by the `index.gc_deletes` index setting and defaults to 60 seconds.
[float]
[discrete]
[[delete-routing]]
===== Routing
@ -80,7 +80,7 @@ DELETE /twitter/_doc/1?routing=kimchy
This request deletes the tweet with id `1`, but it is routed based on the
user. The document is not deleted if the correct routing is not specified.
[float]
[discrete]
[[delete-index-creation]]
===== Automatic index creation
@ -89,7 +89,7 @@ the delete operation automatically creates the specified index if it does not
exist. For information about manually creating indices, see
<<indices-create-index,create index API>>.
[float]
[discrete]
[[delete-distributed]]
===== Distributed
@ -97,7 +97,7 @@ The delete operation gets hashed into a specific shard id. It then gets
redirected into the primary shard within that id group, and replicated
(if needed) to shard replicas within that id group.
[float]
[discrete]
[[delete-wait-for-active-shards]]
===== Wait for active shards
@ -107,14 +107,14 @@ before starting to process the delete request. See
<<index-wait-for-active-shards,here>> for further details and a usage
example.
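For example, assuming the index has at least one replica, the following sketch waits for two active shard copies before processing the delete:
[source,console]
--------------------------------------------------
DELETE /twitter/_doc/1?wait_for_active_shards=2
--------------------------------------------------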
[float]
[discrete]
[[delete-refresh]]
===== Refresh
Control when the changes made by this request are visible to search. See
<<docs-refresh>>.
[float]
[discrete]
[[delete-timeout]]
===== Timeout

View File

@ -30,7 +30,7 @@ particular index. Use HEAD to verify that a document exists. You can
use the `_source` resource to retrieve just the document source or verify
that it exists.
[float]
[discrete]
[[realtime]]
===== Realtime
@ -41,7 +41,7 @@ has been updated but is not yet refreshed, the get API will have to parse
and analyze the source to extract the stored fields. In order to disable
realtime GET, the `realtime` parameter can be set to `false`.
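For example, this sketch disables realtime GET for a single request, so only the last refreshed state of the document is visible:
[source,console]
--------------------------------------------------
GET twitter/_doc/0?realtime=false
--------------------------------------------------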
[float]
[discrete]
[[get-source-filtering]]
===== Source filtering
@ -75,7 +75,7 @@ GET twitter/_doc/0?_source=*.id,retweeted
--------------------------------------------------
// TEST[setup:twitter]
[float]
[discrete]
[[get-routing]]
===== Routing
@ -91,7 +91,7 @@ GET twitter/_doc/2?routing=user1
This request gets the tweet with id `2`, but it is routed based on the
user. The document is not fetched if the correct routing is not specified.
[float]
[discrete]
[[preference]]
===== Preference
@ -112,7 +112,7 @@ Custom (string) value::
states. A sample value can be something like the web session id, or the
user name.
[float]
[discrete]
[[get-refresh]]
===== Refresh
@ -122,7 +122,7 @@ it to `true` should be done after careful thought and verification that
this does not cause a heavy load on the system (and slows down
indexing).
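For example, to force a refresh before the get (use sparingly, for the reasons above):
[source,console]
--------------------------------------------------
GET twitter/_doc/0?refresh=true
--------------------------------------------------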
[float]
[discrete]
[[get-distributed]]
===== Distributed
@ -132,7 +132,7 @@ result. The replicas are the primary shard and its replicas within that
shard id group. This means that the more replicas we have, the
better GET scaling we will have.
[float]
[discrete]
[[get-versioning]]
===== Versioning support
@ -262,7 +262,7 @@ HEAD twitter/_doc/0
{es} returns a status code of `200 - OK` if the document exists, or
`404 - Not Found` if it doesn't.
[float]
[discrete]
[[_source]]
===== Get the source field only
@ -294,7 +294,7 @@ HEAD twitter/_source/1
--------------------------------------------------
// TEST[continued]
[float]
[discrete]
[[get-stored-fields]]
===== Get stored fields

View File

@ -219,7 +219,7 @@ the order specified.
<3> Allow automatic creation of any index. This is the default.
[float]
[discrete]
[[operation-type]]
===== Put if absent
@ -228,7 +228,7 @@ setting the `op_type` parameter to _create_. In this case,
the index operation fails if a document with the specified ID
already exists in the index.
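As a sketch (the document body is a placeholder), the following request fails with a `409` if document `1` already exists:
[source,console]
--------------------------------------------------
PUT twitter/_doc/1?op_type=create
{
  "user" : "kimchy",
  "message" : "trying out Elasticsearch"
}
--------------------------------------------------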
[float]
[discrete]
===== Create document IDs automatically
When using the `POST /<target>/_doc/` request format, the `op_type` is
@ -266,7 +266,7 @@ The API returns the following result:
--------------------------------------------------
// TESTRESPONSE[s/W0tpsmIBdwcYyG50zbta/$body._id/ s/"successful": 2/"successful": 1/]
[float]
[discrete]
[[optimistic-concurrency-control-index]]
===== Optimistic concurrency control
@ -276,7 +276,7 @@ term specified by the `if_seq_no` and `if_primary_term` parameters. If a
mismatch is detected, the operation will result in a `VersionConflictException`
and a status code of 409. See <<optimistic-concurrency-control>> for more details.
[float]
[discrete]
[[index-routing]]
===== Routing
@ -308,7 +308,7 @@ value is provided or extracted.
NOTE: Data streams do not support custom routing. Instead, target the
appropriate backing index for the stream.
[float]
[discrete]
[[index-distributed]]
===== Distributed
@ -317,7 +317,7 @@ The index operation is directed to the primary shard based on its route
containing this shard. After the primary shard completes the operation,
if needed, the update is distributed to applicable replicas.
[float]
[discrete]
[[index-wait-for-active-shards]]
===== Active shards
@ -375,14 +375,14 @@ replication succeeded/failed.
--------------------------------------------------
// NOTCONSOLE
[float]
[discrete]
[[index-refresh]]
===== Refresh
Control when the changes made by this request are visible to search. See
<<docs-refresh,refresh>>.
[float]
[discrete]
[[index-noop]]
===== Noop updates
@ -397,7 +397,7 @@ It's a combination of lots of factors like how frequently your data source
sends updates that are actually noops and how many queries per second
Elasticsearch runs on the shard receiving the updates.
[float]
[discrete]
[[timeout]]
===== Timeout
@ -420,7 +420,7 @@ PUT twitter/_doc/1?timeout=5m
}
--------------------------------------------------
[float]
[discrete]
[[index-versioning]]
===== Versioning
@ -466,7 +466,7 @@ a database is simplified if external versioning is used, as only the
latest version will be used if the index operations arrive out of order for
whatever reason.
[float]
[discrete]
[[index-version-types]]
===== Version types

View File

@ -80,7 +80,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=version]
include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=version_type]
[float]
[discrete]
[[docs-multi-termvectors-api-example]]
==== {api-examples-title}

View File

@ -29,7 +29,7 @@ to return.
Take no refresh-related actions. The changes made by this request will be made
visible at some point after the request returns.
[float]
[discrete]
==== Choosing which setting to use
// tag::refresh-default[]
Unless you have a good reason to wait for the change to become visible, always
@ -62,7 +62,7 @@ refresh immediately, `refresh=true` will affect other ongoing requests. In
general, if you have a running system you don't wish to disturb then
`refresh=wait_for` is a smaller modification.
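For example, a sketch of an index request that returns only once the change is visible to search, without forcing an immediate refresh:
[source,console]
--------------------------------------------------
PUT test/_doc/1?refresh=wait_for
{ "test" : "test" }
--------------------------------------------------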
[float]
[discrete]
[[refresh_wait_for-force-refresh]]
==== `refresh=wait_for` Can Force a Refresh
@ -78,7 +78,7 @@ contain `"forced_refresh": true`.
Bulk requests only take up one slot on each shard that they touch no matter how
many times they modify the shard.
[float]
[discrete]
==== Examples
These will create a document and immediately refresh the index so it is visible:

View File

@ -409,7 +409,7 @@ POST twitter/_update_by_query?pipeline=set-foo
// TEST[setup:twitter]
[float]
[discrete]
[[docs-update-by-query-fetch-tasks]]
===== Get the status of update by query operations
@ -488,7 +488,7 @@ and `wait_for_completion=false` was set on it, then it'll come back with a
you to delete that document.
[float]
[discrete]
[[docs-update-by-query-cancel-task-api]]
===== Cancel an update by query operation
@ -506,7 +506,7 @@ API above will continue to list the update by query task until this task checks
that it has been cancelled and terminates itself.
[float]
[discrete]
[[docs-update-by-query-rethrottle]]
===== Change throttling for a request
query takes effect immediately, but rethrottling that slows down the query will
take effect after completing the current batch. This prevents scroll
timeouts.
[float]
[discrete]
[[docs-update-by-query-manual-slice]]
===== Slice manually
Slice an update by query manually by providing a slice id and total number of
@ -581,7 +581,7 @@ Which results in a sensible `total` like this one:
}
----------------------------------------------------------------
[float]
[discrete]
[[docs-update-by-query-automatic-slice]]
===== Use automatic slicing
@ -651,7 +651,7 @@ being updated.
* Each sub-request gets a slightly different snapshot of the source data stream or index
though these are all taken at approximately the same time.
[float]
[discrete]
[[picking-up-a-new-property]]
===== Pick up a new property

View File

@ -190,7 +190,7 @@ POST test/_update/1
--------------------------------------------------
// TEST[continued]
[float]
[discrete]
===== Update part of a document
The following partial update adds a new field to the
@ -210,7 +210,7 @@ POST test/_update/1
If both `doc` and `script` are specified, then `doc` is ignored. If you
specify a scripted update, include the fields you want to update in the script.
[float]
[discrete]
===== Detect noop updates
By default updates that don't change anything detect that they don't change
@ -263,7 +263,7 @@ POST test/_update/1
// TEST[continued]
[[upserts]]
[float]
[discrete]
===== Upsert
If the document does not already exist, the contents of the `upsert` element
@ -288,7 +288,7 @@ POST test/_update/1
--------------------------------------------------
// TEST[continued]
[float]
[discrete]
[[scripted_upsert]]
===== Scripted upsert
@ -316,7 +316,7 @@ POST sessions/_update/dh3sgudg8gsrgl
// TEST[s/"id": "my_web_session_summariser"/"source": "ctx._source.page_view_event = params.pageViewEvent"/]
// TEST[continued]
[float]
[discrete]
[[doc_as_upsert]]
===== Doc as upsert

View File

@ -15,7 +15,7 @@ You can use EQL in {es} to easily express relationships between events and
quickly match events with shared properties. You can use EQL and query
DSL together to better filter your searches.
[float]
[discrete]
[[eql-advantages]]
=== Advantages of EQL
@ -32,7 +32,7 @@ While you can use EQL for any event-based data, we created EQL for threat
hunting. EQL not only supports indicator of compromise (IOC) searching but
makes it easy to describe activity that goes beyond IOCs.
[float]
[discrete]
[[when-to-use-eql]]
=== When to use EQL
@ -42,7 +42,7 @@ Consider using EQL if you:
* Search time-series data or logs, such as network or system logs
* Want an easy way to explore relationships between events
[float]
[discrete]
[[eql-toc]]
=== In this section

View File

@ -37,7 +37,7 @@ To take {es} for a test drive, you can create a
the {ess} or set up a multi-node {es} cluster on your own
Linux, macOS, or Windows machine.
[float]
[discrete]
[[run-elasticsearch-hosted]]
=== Run {es} on Elastic Cloud
@ -53,7 +53,7 @@ and verify your email address.
Once you've created a deployment, you're ready to <<getting-started-index>>.
[float]
[discrete]
[[run-elasticsearch-local]]
=== Run {es} locally on Linux, macOS, or Windows
@ -226,7 +226,7 @@ privileges are required to run each API, see <<rest-apis>>.
{es} responds to each API request with an HTTP status code like `200 OK`. With
the exception of `HEAD` requests, it also returns a JSON-encoded response body.
[float]
[discrete]
[[gs-other-install]]
=== Other installation options
@ -314,7 +314,7 @@ and shows the original source fields that were indexed.
// TESTRESPONSE[s/"_seq_no" : \d+/"_seq_no" : $body._seq_no/ ]
// TESTRESPONSE[s/"_primary_term" : \d+/"_primary_term" : $body._primary_term/]
[float]
[discrete]
[[getting-started-batch-processing]]
=== Indexing documents in bulk

View File

@ -15,12 +15,12 @@ For additional information about working with the explore API, see the Graph
{kibana-ref}/graph-troubleshooting.html[Troubleshooting] and
{kibana-ref}/graph-limitations.html[Limitations] topics.
[float]
[discrete]
=== Request
`POST <index>/_graph/explore`
[float]
[discrete]
=== Description
An initial request to the `_explore` API contains a seed query that identifies
@ -29,7 +29,7 @@ and connections you want to include in the graph. Subsequent `_explore` requests
enable you to _spider out_ from one or more vertices of interest. You can exclude
vertices that have already been returned.
[float]
[discrete]
=== Request Body
[role="child_attributes"]
@ -185,13 +185,13 @@ a maximum number of documents per value for that field. For example:
======
====
// [float]
// [discrete]
// === Authorization
[float]
[discrete]
=== Examples
[float]
[discrete]
[[basic-search]]
==== Basic exploration
@ -289,7 +289,7 @@ to the other as part of exploration. The `doc_count` value indicates how many
documents in the sample set contain this pairing of terms (this is
not a global count for all documents in the index).
[float]
[discrete]
[[optional-controls]]
==== Optional controls
@ -369,7 +369,7 @@ the connection is returned for global consideration.
<8> Restrict which documents are considered as you explore connected terms.
[float]
[discrete]
[[spider-search]]
==== Spidering operations

View File

@ -1,7 +1,7 @@
[[tune-for-disk-usage]]
== Tune for disk usage
[float]
[discrete]
=== Disable the features you do not need
By default Elasticsearch indexes and adds doc values to most fields so that they
@ -86,7 +86,7 @@ PUT index
}
--------------------------------------------------
[float]
[discrete]
[[default-dynamic-string-mapping]]
=== Don't use default dynamic string mappings
@ -121,20 +121,20 @@ PUT index
}
--------------------------------------------------
[float]
[discrete]
=== Watch your shard size
Larger shards are going to be more efficient at storing data. To increase the size of your shards, you can decrease the number of primary shards in an index by <<indices-create-index,creating indices>> with fewer primary shards, creating fewer indices (e.g. by leveraging the <<indices-rollover-index,Rollover API>>), or modifying an existing index using the <<indices-shrink-index,Shrink API>>.
Keep in mind that large shard sizes come with drawbacks, such as long full recovery times.
[float]
[discrete]
[[disable-source]]
=== Disable `_source`
The <<mapping-source-field,`_source`>> field stores the original JSON body of the document. If you don't need access to it you can disable it. However, APIs that need access to `_source`, such as update and reindex, won't work.
[float]
[discrete]
[[best-compression]]
=== Use `best_compression`
@ -142,19 +142,19 @@ The `_source` and stored fields can easily take a non-negligible amount of disk
space. They can be compressed more aggressively by using the `best_compression`
<<index-codec,codec>>.
[float]
[discrete]
=== Force Merge
Indices in Elasticsearch are stored in one or more shards. Each shard is a Lucene index and is made up of one or more segments, the actual files on disk. Larger segments are more efficient for storing data.
The <<indices-forcemerge,`_forcemerge` API>> can be used to reduce the number of segments per shard. In many cases, the number of segments can be reduced to one per shard by setting `max_num_segments=1`.
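For example, assuming `my-index` is no longer being written to, a sketch of the call:
[source,console]
--------------------------------------------------
POST /my-index/_forcemerge?max_num_segments=1
--------------------------------------------------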
[float]
[discrete]
=== Shrink Index
The <<indices-shrink-index,Shrink API>> allows you to reduce the number of shards in an index. Together with the Force Merge API above, this can significantly reduce the number of shards and segments of an index.
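As a sketch, assuming `my-index` has first been made read-only and a copy of every shard relocated to a single node, shrinking it to one primary shard might look like this:
[source,console]
--------------------------------------------------
POST /my-index/_shrink/my-shrunken-index
{
  "settings" : {
    "index.number_of_shards" : 1
  }
}
--------------------------------------------------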
[float]
[discrete]
=== Use the smallest numeric type that is sufficient
The type that you pick for <<number,numeric data>> can have a significant impact
@ -164,7 +164,7 @@ stored in a `scaled_float` if appropriate or in the smallest type that fits the
use-case: using `float` over `double`, or `half_float` over `float` will help
save storage.
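For illustration (the index and field names are placeholders), a mapping that applies this advice:
[source,console]
--------------------------------------------------
PUT my-index
{
  "mappings" : {
    "properties" : {
      "price"    : { "type" : "scaled_float", "scaling_factor" : 100 },
      "quantity" : { "type" : "short" }
    }
  }
}
--------------------------------------------------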
[float]
[discrete]
=== Use index sorting to colocate similar documents
When Elasticsearch stores `_source`, it compresses multiple documents at once
@ -178,7 +178,7 @@ to the index. If you enabled <<index-modules-index-sorting,index sorting>>
then instead they are compressed in sorted order. Sorting documents with similar
structure, fields, and values together should improve the compression ratio.
[float]
[discrete]
=== Put fields in the same order in documents
Because multiple documents are compressed together into blocks,

View File

@ -1,7 +1,7 @@
[[general-recommendations]]
== General recommendations
[float]
[discrete]
[[large-size]]
=== Don't return large result sets
@ -11,7 +11,7 @@ for workloads that fall into the database domain, such as retrieving all
documents that match a particular query. If you need to do this, make sure to
use the <<request-body-search-scroll,Scroll>> API.
[float]
[discrete]
[[maximum-document-size]]
=== Avoid large documents

View File

@ -1,7 +1,7 @@
[[tune-for-indexing-speed]]
== Tune for indexing speed
[float]
[discrete]
=== Use bulk requests
Bulk requests will yield much better performance than single-document index
@ -16,7 +16,7 @@ cluster under memory pressure when many of them are sent concurrently, so
it is advisable to avoid going beyond a few tens of megabytes per request
even if larger requests seem to perform better.
[float]
[discrete]
[[multiple-workers-threads]]
=== Use multiple workers/threads to send data to Elasticsearch
@ -36,7 +36,7 @@ Similarly to sizing bulk requests, only testing can tell what the optimal
number of workers is. This can be tested by progressively increasing the
number of workers until either I/O or CPU is saturated on the cluster.
[float]
[discrete]
=== Unset or increase the refresh interval
The operation that consists of making changes visible to search - called a
@ -57,7 +57,7 @@ gets indexed and when it becomes visible, increasing the
<<index-refresh-interval-setting,`index.refresh_interval`>> to a larger value, e.g.
`30s`, might help improve indexing speed.
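For example (the index name is a placeholder), `index.refresh_interval` is a dynamic setting and can be updated on a live index:
[source,console]
--------------------------------------------------
PUT /my-index/_settings
{
  "index" : { "refresh_interval" : "30s" }
}
--------------------------------------------------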
[float]
[discrete]
=== Disable replicas for initial loads
If you have a large amount of data that you want to load all at once into
@ -71,20 +71,20 @@ If `index.refresh_interval` is configured in the index settings, it may further
help to unset it during this initial load and to set it back to its original
value once the initial load is finished.
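A sketch of both steps, with a placeholder index name; restore the original values once the load completes:
[source,console]
--------------------------------------------------
PUT /my-index/_settings
{
  "index" : {
    "number_of_replicas" : 0,
    "refresh_interval" : "-1"
  }
}
--------------------------------------------------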
[float]
[discrete]
=== Disable swapping
You should make sure that the operating system is not swapping out the Java
process by <<setup-configuration-memory,disabling swapping>>.
[float]
[discrete]
=== Give memory to the filesystem cache
The filesystem cache will be used in order to buffer I/O operations. You should
make sure to give at least half the memory of the machine running Elasticsearch
to the filesystem cache.
[float]
[discrete]
=== Use auto-generated ids
When indexing a document that has an explicit id, Elasticsearch needs to check
@ -93,7 +93,7 @@ is a costly operation and gets even more costly as the index grows. By using
auto-generated ids, Elasticsearch can skip this check, which makes indexing
faster.
[float]
[discrete]
=== Use faster hardware
If indexing is I/O bound, you should investigate giving more memory to the
@ -115,7 +115,7 @@ different nodes so there's redundancy for any node failures. You can also use
<<modules-snapshots,snapshot and restore>> to backup the index for further
insurance.
[float]
[discrete]
=== Indexing buffer size
If your node is doing only heavy indexing, be sure
@ -131,7 +131,7 @@ The default is `10%` which is often plenty: for example, if you give the JVM
10GB of memory, it will give 1GB to the index buffer, which is enough to host
two shards that are heavily indexing.
[float]
[discrete]
=== Use {ccr} to prevent searching from stealing resources from indexing
Within a single cluster, indexing and searching can compete for resources. By
@ -140,7 +140,7 @@ one cluster to the other one, and routing all searches to the cluster that has
the follower indices, search activity will no longer steal resources from
indexing on the cluster that hosts the leader indices.
[float]
[discrete]
=== Additional optimizations
Many of the strategies outlined in <<tune-for-disk-usage>> also

View File

@ -4,7 +4,7 @@
The fact that Elasticsearch operates with shards and replicas adds challenges
when it comes to having good scoring.
[float]
[discrete]
==== Scores are not reproducible
Say the same user runs the same request twice in a row and documents do not come
@ -39,7 +39,7 @@ they will be sorted by their internal Lucene doc id (which is unrelated to the
the same shard. So by always hitting the same shard, we would get more
consistent ordering of documents that have the same scores.
[float]
[discrete]
==== Relevancy looks wrong
If you notice that two documents with the same content get different scores or

View File

@ -1,7 +1,7 @@
[[tune-for-search-speed]]
== Tune for search speed
[float]
[discrete]
=== Give memory to the filesystem cache
Elasticsearch heavily relies on the filesystem cache in order to make search
@ -9,7 +9,7 @@ fast. In general, you should make sure that at least half the available memory
goes to the filesystem cache so that Elasticsearch can keep hot regions of the
index in physical memory.
[float]
[discrete]
=== Use faster hardware
If your search is I/O bound, you should investigate giving more memory to the
@ -25,7 +25,7 @@ throttled.
If your search is CPU-bound, you should investigate buying faster CPUs.
[float]
[discrete]
=== Document modeling
Documents should be modeled so that search-time operations are as cheap as possible.
@ -35,7 +35,7 @@ several times slower and <<parent-join,parent-child>> relations can make
queries hundreds of times slower. So if the same questions can be answered without
joins by denormalizing documents, significant speedups can be expected.
[float]
[discrete]
=== Search as few fields as possible
The more fields a <<query-dsl-query-string-query,`query_string`>> or
@ -70,7 +70,7 @@ PUT movies
}
--------------------------------------------------
[float]
[discrete]
=== Pre-index data
You should leverage patterns in your queries to optimize the way data is indexed.
@ -155,13 +155,13 @@ GET index/_search
--------------------------------------------------
// TEST[continued]
[float]
[discrete]
[[map-ids-as-keyword]]
=== Consider mapping identifiers as `keyword`
include::../mapping/types/numeric.asciidoc[tag=map-ids-as-keyword]
[float]
[discrete]
=== Avoid scripts
If possible, avoid using <<modules-scripting,scripts>> or
@ -169,7 +169,7 @@ If possible, avoid using <<modules-scripting,scripts>> or
<<scripts-and-search-speed>>.
[float]
[discrete]
=== Search rounded dates
Queries on date fields that use `now` are typically not cacheable since the
@ -284,7 +284,7 @@ However such practice might make the query run slower in some cases since the
overhead introduced by the `bool` query may defeat the savings from better
leveraging the query cache.
[float]
[discrete]
=== Force-merge read-only indices
Indices that are read-only may benefit from being <<indices-forcemerge,merged
@ -299,7 +299,7 @@ background merge process to perform merges as needed to keep the index running
smoothly. If you continue to write to a force-merged index then its performance
may become much worse.
[float]
[discrete]
=== Warm up global ordinals
Global ordinals are a data-structure that is used in order to run
@ -325,7 +325,7 @@ PUT index
}
--------------------------------------------------
[float]
[discrete]
=== Warm up the filesystem cache
If the machine running Elasticsearch is restarted, the filesystem cache will be
@ -339,14 +339,14 @@ WARNING: Loading data into the filesystem cache eagerly on too many indices or
too many files will make search _slower_ if the filesystem cache is not large
enough to hold all the data. Use with caution.
[float]
[discrete]
=== Use index sorting to speed up conjunctions
<<index-modules-index-sorting,Index sorting>> can be useful in order to make
conjunctions faster at the cost of slightly slower indexing. Read more about it
in the <<index-modules-index-sorting-conjunctions,index sorting documentation>>.
[float]
[discrete]
[[preference-cache-optimization]]
=== Use `preference` to optimize cache utilization
@ -364,7 +364,7 @@ one after another, for instance in order to analyze a narrower subset of the
index, using a preference value that identifies the current user or session
could help optimize usage of the caches.
[float]
[discrete]
=== Replicas might help with throughput, but not always
In addition to improving resiliency, replicas can help improve throughput. For

View File

@ -8,7 +8,7 @@
Index Modules are modules created per index and control all aspects related to
an index.
[float]
[discrete]
[[index-modules-settings]]
== Index Settings
@ -31,7 +31,7 @@ WARNING: Changing static or dynamic index settings on a closed index could
result in incorrect settings that are impossible to rectify without deleting
and recreating the index.
[float]
[discrete]
=== Static index settings
Below is a list of all _static_ index settings that are not associated with any
@ -88,7 +88,7 @@ indices.
per request through the use of the `expand_wildcards` parameter. Possible values are
`true` and `false` (default).
[float]
[discrete]
[[dynamic-index-settings]]
=== Dynamic index settings
@ -238,7 +238,7 @@ specific index module:
the default pipeline (if it exists). The special pipeline name `_none`
indicates no ingest pipeline will run.
[float]
[discrete]
=== Settings in other index modules
Other index settings are available in index modules:
@ -285,7 +285,7 @@ Other index settings are available in index modules:
Configure indexing back pressure limits.
[float]
[discrete]
[[x-pack-index-settings]]
=== [xpack]#{xpack} index settings#

View File

@ -21,7 +21,7 @@ For example, you could use a custom node attribute to indicate a node's
performance characteristics and use shard allocation filtering to route shards
for a particular index to the most appropriate class of hardware.
[float]
[discrete]
[[index-allocation-filters]]
==== Enabling index-level shard allocation filtering
@ -74,7 +74,7 @@ PUT test/_settings
// TEST[s/^/PUT test\n/]
--
[float]
[discrete]
[[index-allocation-settings]]
==== Index allocation filter settings

View File

@ -56,7 +56,7 @@ copying just the missing operations from the translog
<<index-modules-translog-retention,as long as those operations are retained
there>>. {ccr-cap} will not function if soft deletes are disabled.
[float]
[discrete]
=== History retention settings
`index.soft_deletes.enabled`::

View File

@ -100,7 +100,7 @@ a sort on an existing index. Index sorting also has a cost in terms of indexing
documents must be sorted at flush and merge time. You should test the impact on your application
before activating this feature.
[float]
[discrete]
[[early-terminate]]
=== Early termination of search request

View File

@ -10,7 +10,7 @@ deletes.
The merge process uses auto-throttling to balance the use of hardware
resources between merging and other activities like search.
[float]
[discrete]
[[merge-scheduling]]
=== Merge scheduling

View File

@ -9,7 +9,7 @@ Configuring a custom similarity is considered an expert feature and the
builtin similarities are most likely sufficient as is described in
<<similarity>>.
[float]
[discrete]
[[configuration]]
=== Configuring a similarity
@ -52,10 +52,10 @@ PUT /index/_mapping
--------------------------------------------------
// TEST[continued]
[float]
[discrete]
=== Available similarities
[float]
[discrete]
[[bm25]]
==== BM25 similarity (*default*)
@ -80,7 +80,7 @@ This similarity has the following options:
Type name: `BM25`
[float]
[discrete]
[[dfr]]
==== DFR similarity
@ -110,7 +110,7 @@ All options but the first option need a normalization value.
Type name: `DFR`
[float]
[discrete]
[[dfi]]
==== DFI similarity
@ -130,7 +130,7 @@ frequency will get a score equal to 0.
Type name: `DFI`
[float]
[discrete]
[[ib]]
==== IB similarity
@ -151,7 +151,7 @@ This similarity has the following options:
Type name: `IB`
[float]
[discrete]
[[lm_dirichlet]]
==== LM Dirichlet similarity
@ -167,7 +167,7 @@ Lucene, so such terms get a score of 0.
Type name: `LMDirichlet`
[float]
[discrete]
[[lm_jelinek_mercer]]
==== LM Jelinek Mercer similarity
@ -180,7 +180,7 @@ for title queries and `0.7` for long queries. Defaults to `0.1`. When value appro
Type name: `LMJelinekMercer`
[float]
[discrete]
[[scripted_similarity]]
==== Scripted similarity
@ -508,7 +508,7 @@ GET /index/_search?explain=true
Type name: `scripted`
[float]
[discrete]
[[default-base]]
==== Default Similarity

View File

@ -1,7 +1,7 @@
[[index-modules-slowlog]]
== Slow Log
[float]
[discrete]
[[search-slow-log]]
=== Search Slow Log
@ -82,7 +82,7 @@ logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref
logger.index_search_slowlog_rolling.additivity = false
--------------------------------------------------
[float]
[discrete]
==== Identifying search slow log origin
It is often useful to identify what triggered a slow-running query. If a call was initiated with an `X-Opaque-ID` header, then the user ID
@ -117,7 +117,7 @@ The user ID is also included in JSON logs.
---------------------------
// NOTCONSOLE
[float]
[discrete]
[[index-slow-log]]
=== Index Slow Log

View File

@ -7,7 +7,7 @@ NOTE: This is a low-level setting. Some store implementations have poor
concurrency or disable optimizations for heap memory usage. We recommend
sticking to the defaults.
[float]
[discrete]
[[file-system]]
=== File system storage types

View File

@ -22,7 +22,7 @@ would make replaying its operations take a considerable amount of time during
recovery. The ability to perform a flush manually is also exposed through an
API, although this is rarely needed.
[float]
[discrete]
=== Translog settings
The data in the translog is only persisted to disk when the translog is
@ -76,7 +76,7 @@ update, or bulk request. This setting accepts the following parameters:
has been reached a flush will happen, generating a new Lucene commit point.
Defaults to `512mb`.
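For example (the index name and value are placeholders), the threshold is a dynamic index setting:
[source,console]
--------------------------------------------------
PUT /my-index/_settings
{
  "index.translog.flush_threshold_size" : "1gb"
}
--------------------------------------------------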
[float]
[discrete]
[[index-modules-translog-retention]]
==== Translog retention

View File

@ -4,7 +4,7 @@
Index APIs are used to manage individual indices,
index settings, aliases, mappings, and index templates.
[float]
[discrete]
[[index-management]]
=== Index management:
@ -23,7 +23,7 @@ index settings, aliases, mappings, and index templates.
* <<indices-resolve-index-api>>
[float]
[discrete]
[[mapping-management]]
=== Mapping management:
@ -32,7 +32,7 @@ index settings, aliases, mappings, and index templates.
* <<indices-get-field-mapping>>
* <<indices-types-exists>>
[float]
[discrete]
[[alias-management]]
=== Alias management:
* <<indices-add-alias>>
@ -41,14 +41,14 @@ index settings, aliases, mappings, and index templates.
* <<indices-alias-exists>>
* <<indices-aliases>>
[float]
[discrete]
[[index-settings]]
=== Index settings:
* <<indices-update-settings>>
* <<indices-get-settings>>
* <<indices-analyze>>
[float]
[discrete]
[[index-templates]]
=== Index templates:
* <<indices-templates>>
@ -61,7 +61,7 @@ index settings, aliases, mappings, and index templates.
* <<indices-simulate-index>>
* <<indices-simulate-template>>
[float]
[discrete]
[[monitoring]]
=== Monitoring:
* <<indices-stats>>
@ -69,7 +69,7 @@ index settings, aliases, mappings, and index templates.
* <<indices-recovery>>
* <<indices-shards-stores>>
[float]
[discrete]
[[status-management]]
=== Status management:
* <<indices-clearcache>>
@ -78,7 +78,7 @@ index settings, aliases, mappings, and index templates.
* <<indices-synced-flush-api>>
* <<indices-forcemerge>>
[float]
[discrete]
[[dangling-indices-api]]
=== Dangling indices:
* <<dangling-indices-list>>

View File

@ -118,7 +118,7 @@ The enrich processor works best with reference data
that doesn't change frequently.
====
[float]
[discrete]
[[enrich-prereqs]]
==== Prerequisites

View File

@ -26,7 +26,7 @@ order.
The processors in a pipeline have read and write access to documents that pass through the pipeline.
The processors can access fields in the source of a document and the document's metadata fields.
[float]
[discrete]
[[accessing-source-fields]]
=== Accessing Fields in the Source
Accessing a field in the source is straightforward. You simply refer to fields by
@ -56,7 +56,7 @@ On top of this, fields from the source are always accessible via the `_source` p
--------------------------------------------------
// NOTCONSOLE
[float]
[discrete]
[[accessing-metadata-fields]]
=== Accessing Metadata Fields
You can access metadata fields in the same way that you access fields in the source. This
@ -78,7 +78,7 @@ The following example sets the `_id` metadata field of a document to `1`:
The following metadata fields are accessible by a processor: `_index`, `_type`, `_id`, `_routing`.
[float]
[discrete]
[[accessing-ingest-metadata]]
=== Accessing Ingest Metadata Fields
Beyond metadata fields and source fields, ingest also adds ingest metadata to the documents that it processes.
@ -106,7 +106,7 @@ Unlike Elasticsearch metadata fields, the ingest metadata field name `_ingest` c
in the source of a document. Use `_source._ingest` to refer to the field in the source document. Otherwise, `_ingest`
will be interpreted as an ingest metadata field.
[float]
[discrete]
[[accessing-template-fields]]
=== Accessing Fields and Metafields in Templates
A number of processor settings also support templating. Settings that support templating can have zero or more
@ -751,7 +751,7 @@ continues to execute, which in this case means that the pipeline does nothing.
The `ignore_failure` can be set on any processor and defaults to `false`.
[float]
[discrete]
[[accessing-error-metadata]]
=== Accessing Error Metadata From Processors Handling Exceptions
@ -853,7 +853,7 @@ A node will not start if this plugin is not available.
The <<cluster-nodes-stats,node stats API>> can be used to fetch ingest usage statistics, globally and on a per
pipeline basis. This is useful for finding out which pipelines are used the most or spend the most time on preprocessing.
[float]
[discrete]
=== Ingest Processor Plugins
Additional ingest processors can be implemented and installed as Elasticsearch {plugins}/intro.html[plugins].

View File

@ -102,7 +102,7 @@ https://www.elastic.co/guide/en/elasticsearch/client/index.html[{es} client]
for your language of choice: Java, JavaScript, Go, .NET, PHP, Perl, Python
or Ruby.
[float]
[discrete]
[[search-data]]
==== Searching your data
@ -127,7 +127,7 @@ construct <<sql-overview, SQL-style queries>> to search and aggregate data
natively inside {es}, and JDBC and ODBC drivers enable a broad range of
third-party applications to interact with {es} via SQL.
[float]
[discrete]
[[analyze-data]]
==== Analyzing your data
@ -159,7 +159,7 @@ size 70 needles, you're displaying a count of the size 70 needles
that match your users' search criteria--for example, all size 70 _non-stick
embroidery_ needles.
[float]
[discrete]
[[more-features]]
===== But wait, there's more
@ -206,7 +206,7 @@ The number of primary shards in an index is fixed at the time that an index is
created, but the number of replica shards can be changed at any time, without
interrupting indexing or query operations.
[float]
[discrete]
[[it-depends]]
==== It depends...
@ -234,7 +234,7 @@ The best way to determine the optimal configuration for your use case is
through https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing[
testing with your own data and queries].
[float]
[discrete]
[[disaster-ccr]]
==== In case of disaster
@ -254,7 +254,7 @@ create secondary clusters to serve read requests in geo-proximity to your users.
the active leader index and handles all write requests. Indices replicated to
secondary clusters are read-only followers.
[float]
[discrete]
[[admin]]
==== Care and feeding

View File

@ -8,26 +8,26 @@
This API enables you to delete licensing information.
[float]
[discrete]
==== Request
`DELETE /_license`
[float]
[discrete]
==== Description
When your license expires, {xpack} operates in a degraded mode. For more
information, see
{kibana-ref}/managing-licenses.html#license-expiration[License expiration].
[float]
[discrete]
==== Authorization
You must have `manage` cluster privileges to use this API.
For more information, see
<<security-privileges>>.
[float]
[discrete]
==== Examples
The following example invokes the delete license API:
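Per the synopsis above, the request is simply:
[source,console]
--------------------------------------------------
DELETE /_license
--------------------------------------------------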

Some files were not shown because too many files have changed in this diff.